Test automation – the bitter truth

Recently, I came across an awesome blog post by Mike Talks, where he tells the story of his personal experience with automation and all the false expectations he and his team had about it. It really is an awesome post; if you haven't read it, go do it now.

I really got inspired by it. I recently wrote a little post concerning automation, and I figured out I had much more to say. I was really busy with the "Software testing is not…" series, so I didn't feel like going off topic. But now I feel the time has come to tell each other some bitter truths about "test automation". They are bitter because people don't like them, don't like talking about them and don't like promoting automation this way, but they are the truth nevertheless.

It all starts with love.

I have to say I love automation, I really do. And the reason I am writing this post is that I love testing and automation, and I want to give the best perspective on how they can supplement each other. Yes, that involves saying some uncomfortable truths, but it has to be done.

So, I said I love test automation, not because I believe it is the one and only solution to all my testing problems; in fact, I am pretty sure I had exactly that delusion when I was new. The reason I like it is that I like to code. I like writing cryptic "spells" and seeing them turn into magic, and I really don't care if it's a source file, an interpreter or a console, I just love it.

I see a lot of people being excited about automation, but for a different reason: they see it as the ultimate solution to all testing problems, the silver bullet, the philosopher's stone, you name it. It seems to them that human testing might be replaced with automated scripts and everything will be great, we will ship the product in no time and the world will be a better place.

I have to object to this. And trust me, this is not a hate post against automation; in fact, it is a post aiming to justify why automation is useful, but useful in the right way, with the right expectations, not the way we want it to be. So, in order to achieve this, I will have to write down some bitter truths about automation.

Bitter truth # 1: It’s not test automation, it is tool assisted testing.

James Bach and Michael Bolton did an awesome job drawing the distinction between "testing" and "checking", so it would be pointless for me to repeat what they already said. If you are not familiar with their work, I strongly recommend you check "Testing and checking refined" and their white paper on automation in testing, "A Context-Driven Approach to Automation in Testing".

In a nutshell, the testing that we perform as humans is not easy to translate into machine language; we cannot expect a machine to act like a human. So, they draw a distinction between "checking", a shallow form of testing that can be translated into a set of instructions and expected conditions, and "testing", which describes the process of human testing with all its complexity and depth. Therefore, the term "test automation" carries a false meaning: we cannot automate testing, we can automate checking, so it is more correct to say "automated checking". Yet I prefer another term they used, because it sounds more elegant and provides more information: "tool assisted testing". Because that's what we do when we automate something: we use tools to assist our testing, that's it. We don't want to replace testing or get rid of it; we want tools that enable us to do more, rather than replace the human effort in testing.

Bitter truth # 2: Automation doesn't decrease the cost of testing.

I am really interested in how that equation works. People claim automation reduces the cost of testing. Let's see.

  • You hire an additional person to deal with building the framework and writing the tests. (If you think your existing testers can do it part-time, while doing their day-to-day tasks, my answer is… yeah, right.) So that's an additional cost.
  • You will probably spend additional money on a tool licence, or you will use open source tools, which means that person will have to spend additional man-hours making the framework fit your specific testing needs.
  • You will have to pay that person to write the actual tests, not only the framework, so that's an additional cost.
  • The code that you write isn't "pixie dust"; it's not perfect. It turns into one additional code base that has to be taken care of, just like any other production code: maintenance, refactoring, debugging, adding new "features", keeping it updated. Guess what, it will all cost you money.
  • And of course, let's not forget all the moments of "oh" and "ah" and "what was the guy writing this fucking thinking" that you will run into, related to the framework you use and its own little bugs and specifics. Those will also cost you additional money and time.

I think that’s enough. The main reason that people give for the “low-cost” of automation is – it pays back in long-term. Now that’s awesome. And it would be true, if we accept that once written a check is a constant that never changes, that it works perfectly and our application never changes until the end of the world. Well, that would be nice, but there’s two problems:

  1. We all know from our testing experience that things written in code don't actually work perfectly out of the box. In fact, it takes a lot of effort to make them work.
  2. If your code base never changes, if it stays the same, you don't add anything, you don't change anything, you don't redesign anything, it is quite possible that you are going out of business. We are working in a fast-evolving environment and we need to be flexible about changes.

So, again, I don't get how all of the above leads to cost reduction.

Bitter truth # 3: Testing code isn’t perfect.

I hear this very often at conferences and in talks among colleagues. It seems as if test automation scripts were some kind of miracle of nature that never breaks and never has bugs: you write them once and it's a fairy tale. Well, unfortunately, it is not.

As stated above, it is code, normal code just like any other. It has "its needs": to be maintained, to be refactored and, guess what… to be tested :O Why is nobody talking about testing the automated checks? I mean, are these guys testers, or did they leave their brains in the fridge or something?! It's code, it has bugs, it has to be tested so we can make sure it checks what we want it to check. Wake up! Not only that, I bet that while doing your automation you will be dealing with bugs introduced by your checks far more often than with bugs introduced by the code of the app you test.
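To make the point concrete, here is a tiny, deliberately buggy check in bash (the URL is hypothetical). The intent is to fail when the homepage doesn't load, but a missing $ makes the condition test a literal string, so the check can never fail:

[code language="bash"]
#!/bin/bash
# a deliberately buggy automated check (illustrative only, hypothetical URL)
page=$(curl -s http://localhost:8080/)

# BUG: "page" without the $ is a non-empty literal string,
# so this condition is always true and the check always "passes"
if [ -n "page" ]; then
    echo "PASS: homepage loaded"
else
    echo "FAIL: homepage did not load"
fi
[/code]

A check like this will happily stay green for months while the application is down, which is exactly why checks need testing of their own.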

Bitter truth # 4: Automation in testing is more reliable… it’s also dumb as fuck

One of the many reasons testers praise automation is that it is more reliable than human testing. It can ensure that every time we execute exactly the same check, with exactly the same starting conditions, which is good; at least it sounds good. The truth is, it's simply dumb, and I mean machine-like dumb. It's an instruction; it simply does what it is told, nothing more, nothing less. Yes, it does the same thing every time, over and over, until you no longer want it to do that thing. Then it turns into a huge pain in the ass, because someone will have to spend the time to go and update the check and make sure it checks the new condition correctly. In other words, automated checks are not very flexible when it comes to changes, and changes are something we see pretty often in our industry.

Bitter truth # 5: Automated checks don’t eliminate human error

For the reasons I stated above, I claim that automated checks cannot eliminate human error. Yes, they eliminate the natural variance that human actions have, but that doesn't mean we don't introduce many different new human errors while writing the code. The conclusion is simple: as long as the code is produced by a human being, it may well have errors in it. That's just life.

Not only that, but by having our checks automated, we introduce the possibility of machine error. Yes, our code is exposed to all the small bugs and weird behaviors that Java, C#, Python, PHP or any other language has. The framework we use might have them too, and its interaction with the infrastructure it runs on might also introduce errors. We must be aware of that.

Bitter truth # 6: Automated checks are not human testing done automatically

I see and hear this pretty often: everyone says "we know we can't automate everything", and yet they continue to talk as if they could. Not only that, they talk as if they could mimic exactly the process a human performs. And that is not possible. No human activity that is cognitively driven, that requires analysis, experience and experimentation working together to achieve a goal, can be automated; not now, not with the current technology. Yes, AI and robotics are constantly moving forward, and if some beautiful day that happens, I would love to see it. Until then, human testing cannot be automated, at least not what I understand as high-quality human testing. What we can automate is shallow checking, and then we act like it will do the job just right.

Conclusion

Again, this is not a hate post. It is an informative post, informative about the risks that automation carries. Clever testers know these risks and base their testing on that knowledge. And yet there are a lot of people running excitedly from conference to conference, explaining how automation is the silver bullet of software testing and how it will do magic for your testing. This is wrong.

Tool assisted testing is useful, very useful. And it is fun to do, but we have to use it in the right way: as a tool, as an extension of our testing abilities, not as their replacement. We should know why and when it works, how it works, and what traps it might hold for us. In other words, when we want to use it, we should know why, and whether we are using it in the right way.

And most importantly, human and machine testing simply don't mix. One can assist the other, but they are not interchangeable.

Hope you liked the post. If you did, I would appreciate your shares and retweets. If you didn’t, I would love to see your opinion in the comments. Thanks for reading 🙂 Good luck.

WordPress docker container on your local PC.

[Image: WordPress + Docker. Image by: http://blog.loadimpact.com/wp-content/uploads/2014/12/WordPress-Docker.png]

I had been thinking about this for a long time without having the time to actually do it, but now I am done with it and I want to share my progress. Since Linux containers are the next huge thing in administering test environments, I really wanted to make a copy of my blog locally, and I wanted to use a wordpress docker container for that purpose.

In order to do this, we are going to need a running wordpress site hosted somewhere, a docker installation and a WP plugin called Duplicator.

Here’s the small plan of actions to do it:

  • Install docker.
  • Make a package of your existing WP installation via Duplicator.
  • Move the package to the wordpress docker container.
  • Run the installer.
  • Fix what you've just fucked up.

What is a Docker container in the first place?

I see there's a lot of confusion about what a container is, so I will try to explain it the way I understand it. Think of a container as an extremely trimmed-down, minimized version of a Linux OS, running only the utilities you need. Practically, it serves the same purpose virtualization would, but with a lot of advantages over it: no hypervisor, no OS boot times, and the component you run in a container has everything it needs and nothing more. Docker offers pre-built images for almost everything you might need, including wordpress.

Installing Docker.

I am not going to explain this, since it's docker-related info. I recommend visiting Docker's official page, where there's a short tutorial on how to do it, along with a lot of documentation on using docker.

Note: I am using Linux (Lubuntu) as host for the Docker container.

Running a wordpress docker container.

I had some confusion with this: I thought that every time I wanted the container up, I had to "run" it, which is incorrect. So here's a small walk-through of container management.

Run the following commands in exactly this order, and you should end up with a running wordpress docker container on 127.0.0.1:8282. To make sure everything is running smoothly, just check the log: if MySQL was installed without errors and Apache is running, it's all OK for now.
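A minimal sketch, assuming an all-in-one image that bundles MySQL and Apache in a single container and keeps the site under /app (tutum/wordpress is one such image), and naming the container "freddy":

[code language="bash"]
# pull an all-in-one WordPress image (MySQL + Apache in one container)
docker pull tutum/wordpress

# start it as a container named "freddy", bound to 127.0.0.1:8282
docker run -d --name freddy -p 127.0.0.1:8282:80 tutum/wordpress

# check the log to see that MySQL and Apache came up cleanly
docker logs freddy
[/code]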

Once you have your container running, you can bring it down, back up, or restart it with one of the following commands.
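A sketch, again assuming the container is named "freddy":

[code language="bash"]
docker stop freddy      # bring the container down
docker start freddy     # bring it back up (no need to "run" it again)
docker restart freddy   # or restart it in one go
[/code]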

If you have any more questions, I used this article as a guide for doing it.

Creating a package of your existing site.

For this purpose we'll use the plugin called Duplicator. Duplicator is a plugin that lets you make a backup of your site with all its existing assets (plugins, database, media, themes, etc.) and deploy it on an empty machine via an install.php script that the plugin creates. I repeat, empty machine, because I am almost sure you don't need wordpress installed on your new host; it will be created by the script. Still, I wouldn't risk deploying it on an empty Ubuntu server, for example, since I am not quite sure it would install MySQL, Apache and so on. That's why we use the already existing infrastructure of the wordpress container and just deploy the new site there.

So, on to the plugin. When installed, it will appear in your menu, and when you open it you will be prompted to create your first package. The screens are pretty straightforward, so just navigate through them; when done, you will have the two files you need: an installer.php file and an archive.

[Screenshot: creating a package with Duplicator]

Now you need to download both of these locally and put them into the wordpress docker container (the one we named "freddy"). With this installation, all your wordpress-related files are stored in the /app folder, and that's where you need these two as well.
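Copying them in is easy with docker cp; a sketch, assuming a reasonably recent Docker (1.8 or newer can copy into a running container) and an example archive name (yours will be whatever Duplicator produced):

[code language="bash"]
# copy the Duplicator output into the container's /app folder
docker cp installer.php freddy:/app/
docker cp 20150101_mysite_archive.zip freddy:/app/
[/code]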

Once you have these, don't be too quick to run the installer, 'cause you have a couple more things to do. First, make sure the installer.php file has executable rights. For me it wasn't a problem to give it 777, since it's a local machine and I will only use it for testing purposes, but in your case you might want to consider something else. The second thing we need to do is to either delete wp-config.php or the whole directory content, besides the installer and the archive. Otherwise the installer will complain that "there's another wp installation there already" and bla bla bla. For me, deleting the wp-config file worked well enough. Now, executing commands in a container can be done in two ways.

First, you can execute a one-off command via the exec command.
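For example, to set the installer's permissions in one shot (777 being what worked for me locally, as mentioned above):

[code language="bash"]
docker exec freddy chmod 777 /app/installer.php
[/code]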

Or you can sort of log in to, or attach to, the container.
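A sketch; exec with an interactive terminal gives you a shell inside the container:

[code language="bash"]
docker exec -it freddy /bin/bash
[/code]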

And then you can navigate within your container directly and do the rest.
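Once inside, cleaning up for the installer might look like this:

[code language="bash"]
cd /app
rm wp-config.php   # or remove everything except the installer and the archive
exit
[/code]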

That should do it. Now you can go to 127.0.0.1:8282/installer.php and run the installer. In order to proceed, you will need to know your database name, user and password. If you don't know them, you can simply cat wp-config.php (before deleting it) and find them there.

Fixing what we’ve fucked up.

Normally, when you finish the installation, there are a couple of things you'll have to do.

The plugin itself will ask you to update your permalinks, which is good, as well as to delete the installer and the package, to avoid security breaches.

In my case, the site loaded fine, but some of my plugins were broken. For example, w3cache got completely messed up and all my styles were fucked; I had to remove it in order to have the site looking normal. Crayon Syntax Highlighter also got completely broken, so I removed it too. These are the kinds of things you can expect to break after you move the site. I don't think they are anything serious; they are probably fixable by cleaning their caches, but for my purposes that was just pointless.

So, I hope this post was helpful and interesting, if you liked it don’t forget to comment and share it with your friends. Good luck 🙂

Learning Linux as a tester.

[Image: Learning Linux - may the source be with you. Source: www.geekstir.com]

I've been a Linux user for a while and I've been a tester for a while, and I've always thought the two are closely connected. Since I am mostly testing in a Linux environment right now, I decided to state publicly my opinion on why learning Linux is vital for testers and how they can benefit from it.

Why is learning Linux important for you as a tester?

We all switch positions occasionally, and when we find a suitable offer, the technical skills part says something like this:

  • Good knowledge of network protocols (TCP/IP, ICMP, UDP, etc)
  • Good knowledge of at least one scripting language is a plus (Python, Perl, Ruby)
  • Knowledge in OS – Windows or Unix

Now, you're probably noticing that only the last one mentions Linux/Unix, so what do the other two have to do with learning Linux? Well, that's the coolest part: by learning Linux, you are actually developing your skills and knowledge as a tester. Here's why:

  • Linux is an open source OS and allows you to poke as deep as you wish (normally until you break stuff so badly that the only way to fix it is a re-install), which answers a need rooted in our testing mindset: to explore things and see how deeply we can inspect them. So, in the context of network protocols, Linux gives you plenty of ways and tools to learn and get hands-on experience, and thereby improve your knowledge of a topic that's vital for testers and IT professionals in general.
  • Learning Linux, you are practically learning a scripting language on the fly, because the "language" of commands you use in terminals and consoles is a scripting language itself (bash, zsh, ksh, etc.), as the small sketch after this list shows.
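To illustrate, here's a tiny bash one-liner of the kind you end up composing naturally after a while (the log path is hypothetical):

[code language="bash"]
# count errors per day in a log: the same commands you type interactively,
# chained into an ad-hoc "script"
grep "ERROR" /var/log/myapp.log | cut -d' ' -f1 | sort | uniq -c
[/code]

It all looks like you are doing yourself a great favour by learning Linux, but most importantly…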

Open code means an open mind.

It's a long topic and I won't go too deep into it, but there's a great, valuable lesson in using and learning Linux, and it lies in the power of community. It opens you up to the world of open source software, which is huge and profound and complex… and annoying sometimes, but it can always teach you something. Many important testing tools are open source as well; Selenium, for example.

I believe the most valuable lesson you can gain by learning Linux is how to figure out solutions to your problems on your own. And I think that's a skill that's really valuable in the software industry in general. Because let's be honest: when you are a rookie, dealing with Linux is such a pain in the ass. I mean, you come from Windows, where everything is restricted and polite and gives you alert messages, and you step into a totally non-dummy-proof OS where you can practically mess stuff up so badly that you will wonder what the fuck happened.

Then you have the Linux community, which mostly consists of trolling geeks who will answer your problem with "READ THE FUCKING MAN PAGE!!!", as if you knew wtf a man page is. There are cool and helpful guys too, of course, but all of that has its own charm, because it drives you to be more independent: to act, to search, to explore, to investigate. All these skills are really important for a tester. We all know that when we bump into an issue we can ask a senior or a lead for advice, but that doesn't really develop our skills very well; dealing with the problem and finding solutions ourselves, on the other hand, sharpens our skills just like a grindstone sharpens a sword.

Natural drive to explore.

It's a concept Richard Stallman has covered many times in his talks and lectures: the freedom to explore and change the insides of a tool or operating system you are using. This is essential not only for your career as an engineer, but for you as a tester, too. It is our distinct skill to be able to disassemble stuff, put the components back in a different order and then fix them. Like everyone in technology, we are scientists, and in order to develop, we must experiment. The best way to experiment is by learning what you are dealing with, its functions and components (if we don't count reverse engineering, of course).

O, Kali.

Another great reason to learn Linux, besides the fact that the vast majority of the servers in the world run it, is one really awesome skill set that's highly valued and really rare in software testing: security and penetration testing. And to cover these, there's a great Linux distribution called Kali Linux; you can find it here: https://www.kali.org/

Basically, Kali is a fully functional Linux distribution loaded with every security, penetration, packet-sniffing or social engineering tool you could think of. Of course, because security testing is so badass, it doesn't come wrapped in a ribbon with a nice and shiny UI, oh no. Almost every tool in Kali is a command line tool, meaning you will have to learn scripting, write specific commands, etc. But this is definitely something that pays off, and the more you dig into it, the more tech-savvy you become.

Here's a small intro to Kali, including setup and installation on VBox, by a fellow blogger. And here is a whole blog dedicated to Kali Linux, for the more advanced users.

How to start learning Linux.

And now that we've concluded that learning Linux is important, here comes the big question: how do we start? Well, starting is easy: just download a distribution, install it on a VM or run it as a live CD, and give it a try. And here I hear people screaming: "But how? It's complicated, it's hard to use, you have to know the commands, only admins can use it…". That is in the past. Linux nowadays has a nice user interface, an easy installer (Next, Next, OK, Finish) and many other Windows-like features. The thing is, the sooner you forget about them, the better. Because when you have to use Linux to log into a server, there won't be a GUI, so get used to the console; it's nice and easy, just a bit weird in the beginning.

So, what distribution should you pick? Well, there are thousands of them, but here are my suggestions:

  • Easy to install and use (junior level): Ubuntu, Mint, CentOS. These are really user-friendly and meant to be easy to use.
  • A bit more complex (intermediate level): Fedora, Debian. These are, let's say, the big brothers of the above; it might be better to dive into them once you have some previous Linux experience.
  • The elders (senior level): Arch, Slackware. These are intentionally hard to use, at least for a novice user. The reason is that they want to keep things as Unix-like and as simple as possible. You have a lot to learn from them, because you have to do almost everything manually: disk partitioning, configuring interfaces, installing a GUI if you want one, etc. Definitely a must-see, but you need to arm yourself with a lot of patience and knowledge.

Here are some resources for the novice Linux-ers:

In this blog topic I wrote about edx.org's course; I've finished it, it's awesome and a great starting point for total noobs in Linux.

If you like more visual tutorials, Eli the Computer Guy is your man. This guy is amazing: really good at explaining things in a simple, understandable manner, and obviously knowledgeable about various OSs, networking, protocols, programming and I don't know what else. He has a whole section on Linux, with nice tutorials.

Nixcraft is a great blog where almost every problem you search for has a solution, with really cool examples and explanations.

Of course, there are a lot of paid courses and certification programs for Linux, but I am just trying to give an introduction with free and accessible materials here.

So, don't forget to comment and share with your friends… and don't forget to use Linux: open source means an open mind. Good luck! 🙂


Linux Foundation starts a new free course along with edX

The Linux Foundation and edX are releasing a course called "Introduction to Linux".

The course was paid until now; this is its first run as part of edX, and it will be totally free.

According to the description, the course will start in the third quarter of the year and will cover basic knowledge of the three main Linux OS families.

The goals of the course will be:

  • An introduction to Linux's three biggest OS families.
  • Good knowledge of, and the ability to interact with, Linux from both a graphical and a command line perspective.
  • Various techniques commonly used by Linux devs and admins.

The course will be mentored by Jerry Cooperstein, a PhD who has been working with Linux since 1994, developing and delivering training in both kernel and user space.

If you would like to register for free, you could do it here: Introduction to Linux

What’s Linux:

Linux is a lightweight open source operating system, commonly used in big scientific projects, servers and complicated software architectures. Linux is at the heart of one of the most popular mobile operating systems, Android.
I have a lot of reasons to believe that Linux will reach even further, as one of the main competitors in the game industry, since the release of the Steam Machine and SteamOS, a fork of the Debian distribution.

Good luck, guys, and thank you for the great initiative.

Lubuntu 13.10: reducing screen brightness

Since the latest version of Ubuntu, 13.10 "Saucy Salamander", I've been experiencing several issues: apps stopped working, crashes, etc. This is kind of normal; with each new release of an open source distribution there's a period in which you have to be patient while the community releases patches for what got broken. For me, the worst was screen brightness. I am still quite new to Ubuntu (I've been using Lubuntu for a month or so), but I had no luck finding a GUI or terminal tool to reduce brightness using the open source amd/ati driver, and believe me, it was burning my eyes, literally.

Occasionally my function keys worked and I was able to control the brightness, but not for long; as I said, it was only occasional. So, I had two choices: sell my soul to the devil and install the proprietary driver, which by the way SUX, or find myself a solution.
I started looking for one, and I can't say it was easy. That's why I decided to share the solution here and save some poor user from buying new glasses.

The solution for getting your screen brightness reduced, so you don't get sunburn, was pretty simple for me; I hope it works for anyone who needs it.
Here's what you have to do: open a terminal (Ctrl + Alt + T) and execute the following command:


sudo nano /etc/rc.local

The important part here: nano is a text editor, which we use to edit the file, and /etc/rc.local is a file specific to Debian-based distributions that runs at the end of all multi-user boot levels, which makes it a pretty convenient place to put stuff. Opening the file, we get the following output:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exit 0

As the comment at the end says, by default it does nothing.
First we need to comment out the "exit 0" statement and add the magic line:

echo * > /sys/class/backlight/acpi_video0/brightness

Where * is a number from 0 to 10, standing for the level of brightness you desire; in my case, 8. After the changes, the file should look like this:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
#exit 0
echo 8 > /sys/class/backlight/acpi_video0/brightness

Save it, reboot your system and voila!
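If you'd rather try a value before rebooting, you can write to the same file directly as root (a plain sudo echo won't work, because the redirection happens outside sudo). Also note that the scale isn't 0 to 10 on every machine, and the acpi_video0 directory may be named differently (intel_backlight, for example), so it's worth checking first:

cat /sys/class/backlight/acpi_video0/max_brightness
sudo sh -c 'echo 8 > /sys/class/backlight/acpi_video0/brightness'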
Hope that will help anyone, good luck 😉

Creating RAM disc – Linux

Since I happened to get this task as a homework assignment in the Telerik software academy's Linux administration course, I decided to share this valuable knowledge here.

For starters, let's first define a RAM disk: basically, it is RAM turned into a storage device, instead of being used as random access memory.

What are the pros and cons? If we have to point out the good side of RAM used as a storage device, there's one point that always matters: performance. Yes, RAM is the fastest-performing memory if you need to read and write fast, and by fast I mean way faster than any HDD configuration, even SSD.

Here's a small figure showing some numbers:

[Figure: read/write benchmark comparing a RAM disk to HDD and SSD]

Obviously, the results speak for themselves.

The bad sides: well, as we all know, life's unfair, and every performance gain comes at a high price, RAM disks included. Being volatile memory, RAM will be completely erased, and all your data lost, if your computer powers off. So be careful, keep that in mind, and be prepared to move all your valuable data to your hard drives if you plan to power off.

Here are the quick steps to create a RAM disk on Linux.

First you need some spare RAM, that's for sure, and as you can guess, if you are going to make the RAM disk bigger than 4 GB, you will need a 64-bit OS. You can easily check the amount of free memory with the 'free' command, using the '-m' parameter for megabytes or '-h' for a human-readable format.
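For example (the -h flag needs a reasonably recent version of free):

[code language="bash"]
$ free -m    # free memory in megabytes
$ free -h    # human-readable units
[/code]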

Let's say we want to create a 2 GB RAM disk. We first switch to root:

[code language=”bash”]
$ su
# mkdir /tmp/ramdisk; chmod 777 /tmp/ramdisk
# mount -t tmpfs -o size=2048M tmpfs /tmp/ramdisk/
[/code]


2048 here stands for the size, in megabytes, of the RAM we want to allocate as a RAM disk. Needless to say, the amount should be less than your free RAM; otherwise you will start eating into swap, and that will be the end of your performance.
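To verify the mount and get a feel for the speed, a quick check like the following works (the test file name is just an example); and if you want the empty RAM disk recreated on every boot, a tmpfs line in /etc/fstab does the job:

[code language="bash"]
# confirm the tmpfs is mounted with the requested size
df -h /tmp/ramdisk

# rough write-speed test: 1 GB of zeros
dd if=/dev/zero of=/tmp/ramdisk/test.img bs=1M count=1024

# optional: recreate the (empty) RAM disk at every boot via /etc/fstab
# tmpfs  /tmp/ramdisk  tmpfs  size=2048M  0  0
[/code]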

In conclusion, we might say that using a RAM disk is a really good choice when it comes to performance at low cost, but we need to be careful about what data we put in there if our machine is not powered on 24/7.

Installing VirtualBox on Linux, a practical tutorial

Without claiming to be a true master of virtualization, I decided to put up a short, mostly pictorial tutorial on how to install VirtualBox, and a virtual machine on top of it, under Linux. For this purpose the host is Fedora 18 x86-64, and we'll put CentOS 6 on the VM.

We start with the standard switch to root, using the su (switch user) command:


After entering the root password, we need to install VirtualBox (our hypervisor), on which we'll create the virtual machine. We use the following command:

yum install VirtualBox

(No repository needs to be imported; the installation is fully available as is.)


Once the program is installed successfully, we can return to normal user mode by typing "exit". We continue by starting the hypervisor. Since there are no virtual machines loaded yet, we'll create a new one by clicking the "New" button.


The wizard that comes up is completely understandable and accessible even to a technically illiterate or semi-literate user, so even just pressing Next should give results. First, we enter the name of the virtual machine and the operating system.


Next comes a window for choosing the amount of RAM we want to dedicate to the virtual machine. The vendor's advice here is not to allocate more than 50% of the available RAM, but everyone can judge for themselves.


Then comes the creation of the hard disk. There's a small subtlety here: you can choose the disk to be fixed size, which works faster, is created a bit more slowly and permanently allocates space on the host machine, or dynamic, which is a bit slower but in return allocates the space only as it fills up towards its maximum size. For me the latter works perfectly, since I don't want to lose space on the host machine, and I only use the VM to break stuff on it anyway.


The hard drive type doesn't really concern me, so I leave it unchanged from the default.


Understandably, we also need to pick a size for the disk according to our taste and needs, as well as a name for it.


After clicking "Create", the configuration is ready. We can tweak some components, say how many cores the CPU gets or how the video card behaves, but that's optional; once we reach this screen, we already have a virtual machine.
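For reference, the same machine can also be put together from the command line with VBoxManage, the CLI that ships with VirtualBox. A minimal sketch; the VM name, disk size and ISO file name are just examples:

[code language="bash"]
# create and register the VM, then give it 1 GB of RAM
VBoxManage createvm --name "CentOS6" --ostype RedHat_64 --register
VBoxManage modifyvm "CentOS6" --memory 1024

# create a 20 GB (dynamically allocated) disk and attach it via SATA
VBoxManage createhd --filename ~/CentOS6.vdi --size 20480
VBoxManage storagectl "CentOS6" --name "SATA" --add sata
VBoxManage storageattach "CentOS6" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ~/CentOS6.vdi

# attach the installation ISO on an IDE controller and boot the VM
VBoxManage storagectl "CentOS6" --name "IDE" --add ide
VBoxManage storageattach "CentOS6" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium ~/CentOS-6-LiveCD.iso
VBoxManage startvm "CentOS6"
[/code]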


Naturally, after starting the machine we find that it is as empty as the look on our face, so it's time to get to work and find ourselves an operating system. I personally recommend downloading an image file, an iso, that is also a live CD, so we can use the more user-friendly interface. To import the iso into the virtual machine, we open Settings > Storage and choose to load an image file into the empty virtual CD drive by clicking the disc icon on the right.


After importing the CD, life gets a bit easier. You may need to press F12 and tell the virtual machine to boot from the disc, but that's not complicated. A window appears in which we can choose whether to install the OS or just try it out; naturally, we choose installation.


The installation process is standard for any Linux OS, so I won't dwell on it: choosing a root password, time and date, generally standard stuff. Once the system finishes and we get the internet working, everything is in order.


A few small notes regarding VirtualBox:

  • If you have display problems and lag while working with the VM, you may not have enabled the BIOS virtualization option on your machine; this is usually the VT extension. Even if you don't use virtualization otherwise, it doesn't hurt to have it enabled.
  • Another nasty problem with VirtualBox under Linux is that the modules are kernel specific, i.e. on every kernel upgrade you'll have to download the new module, install it and activate it. It's a bit annoying, but survivable; it's done with the following commands, as root:

uname -r : to find out the current kernel version; example result: 3.8.5-201.fc18.x86_64

yum search kmod-virtualbox : finds the modules, ordered by kernel version.

We pick the one we need and install it:

yum install kmod-VirtualBox-3.8.5-201.fc18.x86_64.x86_64

Once it is installed, we activate it manually; the command is:

/etc/sysconfig/modules/VirtualBox.modules

After that, we can start VirtualBox without any trouble.

Good luck, I'll be glad if you share your experience.