Tor browser as a proxy for a specific country.

Recently, I had a task to test some country-specific features, and I found out that there is a limited number of free services for setting up a proxy in a predefined country. Mostly, people use proxies because they don't want their IPs to look like they come from a specific country, due to country regulations and so on; wanting an IP from a specific country is a bit of reverse logic, but as testers we sometimes have to do stuff like this. So, one way to do it, with limitations of its own, is using the Tor browser.

What are Tor and the Tor browser in the first place?

I am not pretending to be extremely familiar with Tor and all its specifics, but at a pretty high level Tor is a network of proxy nodes. It has a couple of different node types – relay, bridge and the Tor browser itself. What it does, simply said, is bounce your connection between many such “nodes” in the Tor network to hide the origin of the connection, i.e. make it anonymous. There is a lot of talk and dispute about whether Tor really provides internet anonymity or is a dud, whether it is reliable and so on, but for the purpose of this blog post I won't dive deeper into the details of Tor. The reason I am using it: it works as a proxy, and we can manipulate it in order to perform some localization testing.

Downloading Tor browser.

You can download the Tor browser from its official page – Downloads. The installation is pretty straightforward: for Windows there's a .exe file you can install, and on Linux there's a .gz archive you can download and extract to whatever directory you need it in. I will be mainly focused on Linux, since that's what I used, but things shouldn't differ much on the other platforms.

Starting Tor browser.

That's fairly simple as well: in the directory where you extracted it, there's a .desktop file which invokes a run script, so you can simply double-click it, or just run it as a shell script with “./”. Now, if you want to see what your address is, you can navigate to an IP-checking page and look it up. With each restart of the browser you will find that a random IP address is assigned. But there's a trick we can use to base our IP on a specific location.

Manipulating country settings.

For that purpose, navigate to the following folder inside your Tor browser installation directory:

~/tor-browser_en-US/Browser/TorBrowser/Data/Tor – what you need there is the file called torrc. Open it with any text editor.

Let's say we want to set our IP to the UK. For that purpose we need to add the following two lines:
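The screenshot from the original post is missing, but the two lines are Tor's standard exit-node options; country codes are two-letter ISO codes, so for the UK they look like this:

```
ExitNodes {gb}
StrictNodes 1
```

StrictNodes 1 tells Tor to fail rather than silently fall back to an exit in another country.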

We can practically put any country code inside the braces to make it work for a specific country. Save the file, start the Tor browser again and check your IP. It's set to the UK – voila!

Known issue:

Since the Tor network and the Tor browser are not proprietary software, and the Tor nodes are simply machines set up by hobbyists, we can't rely on every location being available to proxy through. The network needs a minimum number of nodes in the desired location in order to establish a full circuit, so for some countries you might not be able to connect.

So, I hope this article was interesting and informative for you; if you liked it, don't forget to comment and share. Thanks 😉

WordPress docker container on your local PC.

I have been thinking about this for a long time, but I didn't have the time to realize it; now that I am done with it, I want to share my progress. Since Linux containers are the next huge thing in administering test environments, I really wanted to make a copy of my blog locally, and I wanted to use a WordPress Docker container for that purpose.

In order to do this we are going to need a running WordPress site hosted somewhere, a Docker installation, and a WP plugin called Duplicator.

Here's the small plan of action:

  • Install Docker.
  • Make a package of your existing WP installation via Duplicator.
  • Move the package to the WordPress Docker container.
  • Run the installer.
  • Fix what you've just fucked up.

What is a Docker container in the first place?

I see that there is a lot of confusion about what a container is, so I will try to explain it the way I understand it. Think of a container as an extremely trimmed-down, minimized version of a Linux OS, running only the utilities that you need. Practically, it serves the same purpose as virtualization would, but with a lot of advantages over it: no hypervisor, no boot times for the OS, and the component you run in a container has all it needs and nothing more. Docker offers pre-built images for almost everything you might need, including WordPress.

Installing Docker.

I am not going to explain this, since it's more Docker-related info. I would recommend you visit Docker's official page, where there's a short tutorial on how to do that, including a lot of documentation on using Docker.

Note: I am using Linux (Lubuntu) as host for the Docker container.

Running a wordpress docker container.

I had some confusion with this: in fact, I thought that every time I wanted the container up, I had to run it again. That's incorrect, so here's a small walkthrough of container management.

Run the commands in exactly this order, and they should start a running WordPress Docker container. To make sure everything is running smoothly, just check the log: if MySQL was installed without errors and Apache is running, it's all OK for now.
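For reference, with the official images of that era the two commands looked roughly like this; the container names, password and port here are examples, and --link reflects pre-user-defined-network Docker:

```shell
# Start a MySQL container for WordPress to use:
docker run --name freddy-db -e MYSQL_ROOT_PASSWORD=secret -d mysql:5.6

# Start WordPress linked to it, published on port 8080 of the host:
docker run --name freddy --link freddy-db:mysql -p 8080:80 -d wordpress
```

With -p 8080:80 the site then answers on port 8080 of the host machine.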

Once you have your container running you can bring it up/down and restart it with one of the following commands:
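Assuming the container ended up named “freddy” (use docker ps -a to check the actual name), the lifecycle commands are:

```shell
docker stop freddy      # bring the container down
docker start freddy     # bring it back up
docker restart freddy   # stop and start in one step
```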

If you have any more questions, I used this article as guide for doing it.

Creating a package of your existing site.

For this purpose we'll use the plugin called Duplicator. Duplicator is a plugin that lets you make a backup of your site with all its existing assets – plugins, database, media, themes, etc. – and deploy it on an empty machine via an install.php script that the plugin creates. I repeat, empty machine, because I am almost sure you don't need WordPress installed on your new host; it will be created by the script. That said, I wouldn't risk deploying it on an empty Ubuntu server, for example, since I am not quite sure it would install MySQL, Apache and so on. That's why we use the already existing infrastructure of the WordPress image and just deploy the new site there.

So, on to the plugin. When installed, it will appear in your menu, and when opened you will be prompted to create your first package. The screens are pretty straightforward, so just navigate through them; when done, you will have the two files you need – an installer.php file and an archive.


Now you need to download both of these locally and put them into the WordPress Docker container. Let's say its name is “freddy”. With this installation, all your WordPress-related files are stored in the /app folder, and that's where you need these two as well. You can easily do this by executing the following command.
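A sketch of the copy step, assuming both files sit in your current directory; “archive.zip” stands in for the real archive name, which Duplicator generates with a hash in it:

```shell
# Copy the installer and the package into the container's /app folder:
docker cp installer.php freddy:/app/
docker cp archive.zip freddy:/app/
```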

Once you have these, don't be too quick to run the installer, because you have a couple more things to do. First, make sure the installer.php file has executable rights. For me, it wasn't a problem to give it 777, since it's a local machine I will only use for testing purposes, but in your case you might need to consider something else. The second thing we need to do is either delete wp-config.php or the whole directory content, besides the installer and the archive; otherwise the installer will complain that “there's another WP installation there already” and bla bla bla. For me, deleting the wp-config file worked well enough. Executing commands in a container can be done in two ways.

First, you can execute a command once via the exec command, like this:
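For example, giving the installer the (lax) 777 rights discussed above:

```shell
# One-off command executed inside the running container:
docker exec freddy chmod 777 /app/installer.php
```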

Or you could sort of log in or attach to the container:
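exec with the interactive flags gives you a shell inside the container:

```shell
docker exec -it freddy /bin/bash
```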

And you can directly navigate within your container and do the rest:
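From that shell, removing the old wp-config.php next to our two files looks like this:

```shell
cd /app
rm wp-config.php
```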

That should do it. Now you can go to the container's address and run the installer. In order to proceed, you will need to know your database instance, password and user. If you don't know them, you can simply cat wp-config.php (before deleting it) and find them there.

Fixing what we’ve fucked up.

Normally, when you finish the installation, there are a couple of things you'll have to do.

The plugin itself will ask you to update your symlinks, which is good, and to delete the installer and the package, to avoid security breaches.

In my case, the site loaded fine, but some of my plugins were broken. For example, w3cache got completely messed up and all my styles were fucked; I had to remove it to have the site looking normal. Crayon Syntax Highlighter also got completely broken, so I removed it, too. These are the kinds of things you can expect to break after you move a site. I don't think they are anything serious – they are probably fixable by cleaning their caches – but for my purposes that was just useless.

So, I hope this post was helpful and interesting, if you liked it don’t forget to comment and share it with your friends. Good luck 🙂

Learning Linux as a tester.

I've been a Linux user for a while, and I've been a tester for a while, and I've always thought the two are closely connected. Since I am mostly testing in a Linux environment right now, I decided to state publicly my opinion on why learning Linux is vital for testers and how they can benefit from it.

Why is learning Linux important for you as a tester?

We all switch positions occasionally, and when we find a suitable offer, the technical-skills part says something like this:

  • Good knowledge of network protocols (TCP/IP, ICMP, UDP, etc)
  • Good knowledge of at least one scripting language is a plus (Python, Perl, Ruby)
  • Knowledge in OS – Windows or Unix

Now, you're probably noticing that only the last one mentions Linux/Unix, so what do the other two have to do with learning Linux? Well, that's the coolest part: by learning Linux, you are actually developing your skills and knowledge as a tester. Here's why:

  • Linux is an open-source OS and allows you to poke as deep as you wish (normally until you break stuff so badly that the only way to fix it is to re-install), which answers a need rooted in our testing mindset – to explore things and see how deeply we can inspect them. So, in the context of network protocols, you have plenty of ways and tools in Linux to learn and get hands-on experience, and thereby improve your knowledge of a topic that's vital for testers and IT professionals in general.
  • By learning Linux, you are practically learning a scripting language on the fly, because the “language” of commands you use in terminals and consoles is a scripting language itself (bash, zsh, ksh, etc.). It all looks like you are doing yourself a great favour by learning Linux, but most importantly…

Open code means an open mind.

It’s a long topic and I won’t get too deep in it, but there’s a great valuable lesson in using Linux and learning Linux and it’s in the power of community. It opens you to the world of open source software which is huge and profound and complex … and annoying sometimes, but it could always teach you something. Many important testing tools are open source as well – Selenium for example.

I believe the most valuable lesson you can gain by learning Linux is how to figure out solutions to your problems on your own. And I think that's a skill that's really valuable in the software industry in general. Because, let's be honest, when you are a rookie, dealing with Linux is such a pain in the ass. I mean, you come from Windows, where everything is restricted and polite and gives you alert messages, and you step into a totally non-dummy-proof OS where you can practically mess up stuff so badly that you will wonder what the fuck happened.

Then you have the Linux community, which mostly consists of trolling geeks who will answer your problem with “READ THE FUCKING MAN PAGE!!!”, like you know wtf a man page is. There are cool and helpful guys too, of course, but all of that has its own charm, because it drives you to be more independent – to act, to search, to explore, to investigate. All these skills are really important for a tester, because we all know that when we bump into an issue we can ask a senior or a lead for advice, but that doesn't really develop our skills; on the other hand, dealing with a problem and finding solutions sharpens our skills just like a grinder sharpens a sword.

Natural drive to explore.

It's a concept that was covered many times by Richard Stallman in his talks and lectures – the freedom to explore and change the insides of a tool or operating system you are using. This is essential not only for your career as an engineer, but as a tester, too. It is our distinct skill to be able to disassemble stuff, put the components back in a different order and then fix them. Like everyone in technology, we are scientists, and in order to develop, we must experiment. The best way to experiment is by learning what you are dealing with – its functions and components – if we don't count reverse engineering, of course.

O, Kali.

Another great reason to learn Linux, besides the fact that some 90% of the servers in the world are running it, is one really awesome skill set that's highly valued and really rare in software testing: security and penetration testing. And to cover these, there's a great Linux distribution called Kali Linux.

Basically, Kali is a fully functional Linux distribution loaded with every security, penetration, packet-sniffing or social-engineering tool you could think of. Of course, because security testing is so badass, it doesn't come wrapped with a ribbon and a nice, shiny UI – oh no. Almost every tool in Kali is a command-line tool, meaning you will have to learn scripting, writing specific commands, etc. But this is definitely something that pays off, and the more you dig into it, the more tech-savvy you become.

Here’s a small intro to Kali, including setup and installation on Vbox, by a fellow blogger. And here is a whole blog, dedicated to Kali Linux, for the more advanced users.

How to start learning Linux.

Now that we've concluded that learning Linux is important, there comes the big question: how do we start? Well, starting is easy – just download it, install it on a VM or run it as a live CD, and give it a try. And here I hear people screaming: “but how, it's complicated, it's hard to use, you have to know the commands, only admins can use it…”. That's in the past; Linux nowadays has a nice user interface, an easy installer (Next, Next, OK, Finish) and many other Windows-like features. The thing is, the sooner you forget about them, the better – because when you have to use Linux to log into a server, there won't be a GUI, so get used to the console. It's nice and easy, just a bit weird in the beginning.

So, what distribution should you pick? Well, there are thousands of them, but here are my suggestions:

  • Easy to install and use – junior level – Ubuntu, Mint, CentOS – these are really user-friendly and meant to be easy to use.
  • A bit more complex – intermediate level – Fedora, Debian – these are, let’s say, the above’s big brothers, it might be a better idea to dive into them if you have previous Linux experience.
  • The elders – senior level – Arch, Slackware – these are intentionally hard to use, at least to a novice user. The reason is, they want to keep things as Unix-like as possible and as simplistic as possible. You have a lot to learn from them, because you have to do almost everything manually – disk partitioning, configuring interfaces, installing GUI, if you want one, etc. Definitely a must see, but you need to arm yourself with a lot of patience and knowledge.

Here are some resources for the novice Linux-ers:

In an earlier blog topic I wrote about a free introductory course; I've finished it, it's awesome and a great beginning for total noobs in Linux.

If you prefer more visual tutorials, Eli the Computer Guy is your man. This guy is amazing – really good at explaining stuff in a simple, understandable manner, obviously knowledgeable in various OSs, networking, protocols, programming and I don't know what else – and he has a whole section on Linux with nice tutorials.

Nixcraft is a great blog where almost every problem you search for has a solution, with really cool examples and explanations.

Of course, there are a lot of paid courses and certification programs for Linux, but I am just trying to give an introduction with free and accessible materials here.

So, don’t forget to comment and share with your friends… and don’t forget to use Linux – open source, means an open mind. Good luck! 🙂


How to extract links from an XML file using Linux grep

Recently I offered to provide a colleague of mine with all the links to the blogs I follow, so she could read them too. It was easier said than done. Needless to say, the platform is very friendly in this case and provides you with an option to export all the blogs you follow: you simply go to “Reader” > “Blogs I follow”, click “Edit” and then “Export”, which appears under the text input field.

So far, so good. I was provided with an .opml file (an XML-looking markup) and here is where my misery began. It's good to know that I currently follow about 150 blogs; extracting them all manually would have been a punishment I didn't want 🙂 and handing an ugly .xml file to my co-worker would have been even uglier.


I've done such exercises with regular expressions when learning C#, and I believe I could even do it in C, but dealing with stream readers and writers wasn't as efficient as I wanted. Then one thing popped into my mind.

Regular expressions + Linux.

Regular expressions are widespread in almost every programming language, so it's no surprise they can be used in the Linux shell, too. With only one command, we can do the same job as opening a file (with some stream), reading it, writing to another and then closing it in any programming language. The sweetest part is, we don't have to care about any of that – it's all done by grep. But let's give some context to our task.

The markup.

This was tricky. I won't paste the actual .xml, because I don't want to have these blogs spammed, but it looked just like this:
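The original sample isn't preserved, but a WordPress OPML export is generally shaped like this (the blog name and URLs below are invented):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>Blogs I follow</title>
  </head>
  <body>
    <outline title="Example Blog" text="Example Blog" type="rss"
             xmlUrl="http://example.com/feed/" htmlUrl="http://example.com" />
  </body>
</opml>
```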

So, as we can see, there is actually a lot of information in there, and each link is repeated twice, so we'll have to keep that in mind, too.

Using grep.

Grep is a command-line utility that searches a text file for a specific phrase or pattern and displays the results on the standard output, from where they can be redirected to another file.
Normally the command looks like this:
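In its simplest form, it's the pattern followed by the file; here is a tiny self-contained demo (demo.txt and its contents are just an illustration):

```shell
# General form: grep [options] 'pattern' file
printf 'apple\nbanana\ncherry\n' > demo.txt

# Print every line containing "an":
grep 'an' demo.txt
# prints: banana
```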

To be honest, there's no easy way to learn regex: you either struggle with it long enough until you learn it, or you apply the try-fail routine until you get it. I don't pretend to be a pro – I mostly use the second approach. That's why I won't explain in detail everything you see (there are enough tutorials on regex); I will just focus on our specific task.

Our command would look like this.
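The exact command from the original screenshot is lost; based on the explanation that follows, it was along these lines. The two-line sample file here is invented so that the sketch is runnable:

```shell
# Invented stand-in for the real wpcom-subscriptions.opml export:
cat > wpcom-subscriptions.opml <<'EOF'
<outline title="A" text="A" xmlUrl="http://alpha.com/feed/" htmlUrl="http://alpha.com" />
<outline title="B" text="B" xmlUrl="https://beta.net/feed/" htmlUrl="https://beta.net" />
EOF

# -o prints only the match, -E enables extended regexes. The pattern keeps
# a URL only when a listed top-level domain sits right before the closing
# quote and a whitespace, which skips the xmlUrl (".../feed/") duplicates:
grep -oE 'http(s|:)[^"]+\.(com|net|org)("|\\")\s' wpcom-subscriptions.opml > sites.txt

cat sites.txt
```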

So, before we can proudly say “I know regex”, we must explain what we did with the command.

OK, the easy part: we use grep as a command-line utility, provide it with the source file (wpcom-subscriptions.opml), and at the end redirect the standard output to a file that will be created for us (> sites.txt). What's left are the additional arguments we provide (-oE): -E stands for “extended regular expressions”, which lets us use more advanced features such as grouping, and -o stands for “only matching”, which prints only the matching part (otherwise grep returns whole lines, which is useless in our case).

So the only part that's left is the regular expression itself. We start by matching the protocol “http”; then we have the symbol representing a word character (\w). But there's another “hack” here: since we don't know whether to expect a letter (e.g. https) or a non-letter symbol, we use the pipe symbol to define one option or the other. Since we really don't care what comes next, we use the '+' symbol, which will match every occurrence of the previous condition until it hits the end of the row, a whitespace character (\s, \t) or the next condition.

The last part of deciphering our regular expression is where we deal with the top-level domains of the URLs (.com, .net, etc.). But first we use the backslash (\) to escape the dot. Why is this important? The dot has its own meaning in regex – “match anything” – so we have to use it carefully, otherwise we might select stuff we don't want. Next we use the same paradigm again: we form a group using parentheses, list inside it the top-level domains we believe are present in our list, and separate them with a pipe. The last part is just cosmetic: if we look at the markup again, we'll see that some of the URLs end with '"' and some with '\"'. We use the same pattern and switch these two options with a pipe. The final symbol, \s, represents whitespace, because our target is the link located in the htmlUrl section, and it ends with a whitespace. If we didn't require the pattern to end with a space, we would just end up extracting every link twice – which we don't want, though it is indeed a “corner case”.


It's always exciting when you can use your coding/scripting skills to tackle a boring task, and I hope this will be helpful. If you ever need to use regular expressions – no matter whether in Linux or in a programming language – and you feel insecure about what to put in your expression, I would recommend an online regex tester. It is helpful because it provides a precise definition of your regular expression, step by step; this way you can see if some part is ambiguous or not precise enough.
I would love to read your opinion on this, and of course, if you liked the topic, feel free to comment and share it with your friends.

Linux Foundation starts new free course along with edX

The Linux Foundation and edX are releasing a course called “Introduction to Linux”.

The course was paid until now; this is its first run as part of edX, and it will be totally free.

According to the description, the course will start in the third quarter of the year and will cover basic knowledge of the three main Linux OS families.

The goals of the course will be:

  • Introduction to Linux's three biggest OS families.
  • Good knowledge and ability to interact with Linux from both graphical and command line perspective.
  • Various techniques commonly used by Linux devs and admins.

The course will be mentored by Jerry Cooperstein – a PhD working in Linux since 1994, developing and delivering training in both kernel and user space.

If you would like to register for free, you could do it here: Introduction to Linux

What’s Linux:

Linux is a lightweight open-source operating system, commonly used in big scientific projects, servers and complicated software architectures. Linux is at the heart of one of the most popular mobile operating systems – Android.
I have a lot of reasons to believe that Linux will reach even further as one of the main competitors in the game industry, since the release of the Steam Machine and SteamOS – a fork of the Debian distribution.

Good luck guys, and thank you for the great initiative.

Today we fight back against mass surveillance


Today (February 11, 2014) is an international day to protest against surveillance of the internet by intelligence agencies like the NSA and their acts to monitor and control our internet privacy. As an open-source enthusiast and a professional in the IT area, I strongly believe that technology should be used as an extension of the human mind and free will, to help it reach further, not to restrict it. Surveillance of the internet is an act of violence against human rights, free will and the freedom of speech – against you and your own private data.

There is a site where you can show your support for the cause and your protest against mass surveillance.

How could you help?

Just spread the word and show that you care. Write a blog post, share the information with your friends on social networks, tweet about it, or do it any way you feel comfortable with – just let the world know that you have an opinion. Be creative.

To support the cause, I would like to share a short TED talk by Mikko Hypponen concerning intelligence agencies, the mass surveillance they perform on every one of us, and how far they have gone. Hope this is helpful.

Thanks for your time.

Ten good reasons to use Linux OS

It was 2 years ago…

It was 2 years ago when I first ran a Linux OS on my PC, and from that moment I started to evolve into a die-hard Linux fan. As I recall, it was some version of Wubi, a Linux installer for Windows that lets you install Linux on top of your Windows file system, so you don't have to set it up as a dual boot. Since then I've changed 2 or 3 distributions and re-installed some of them 4 or 5 times (Linux is not dummy-proof, so the most common way to learn when you don't know what you're doing is: break it, fix it). So I decided to list 10 of the most important things that have kept me using Linux through these 2 years, and will keep me using it for much longer.

1. Freedom – you are allowed to do whatever you want.

Isn't it all about freedom? We are the digital generation; we adopted new technologies before we could even figure them out, and this brought many troubles upon our heads. The thing we desire most is to shape our own digital world where we are our own masters, not any company. And Linux really does provide that: you are no longer obliged to connect any kind of account or give any kind of personal information to some company. One of the things I “liked” most were the updates: as a Windows user, you probably know how annoying the constant nagging messages were – new updates have arrived and you SHOULD install them, otherwise doomsday comes. And if you don't, the “nice guys” from Microsoft have a feature that will do it for you on shutdown. So congrats: now you have the Bing bar in your browser and a 10000-th security update that still sucks. Which brings us to the next point…

2. Better security.

The best part: you install Linux and you don't need an antivirus program. Why? Well, I will use a quote from Wikipedia here:

The vast majority of viruses (over 99%) target systems running Microsoft Windows

And it's true, too. Without going too deep into technical details, let's just say that Linux's system files and folders – all the sensitive information – have a specific access level that cannot be gained by any third-party software. Another reason is that it's not that easy to run an executable file (.exe), which most viruses use as a carrier, on a Linux operating system.

3. Better options for customization.

Everything in Linux is customizable: from the wallpaper to the cursor icons, to the way windows look, fonts, panel arrangement – everything. And the even cooler part is, you can customize the behavior of your operating system. That is in the “very advanced” section, of course, but it is possible.

4. Performance, performance, performance

I believe this is the section where Linux kicks Windows' ass reaalllyyy badly. Linux boots, reboots, shuts down and operates times faster than Windows. This is the reason Linux is still number one on servers and growing fast in mobile OSs, where performance is a must, and it's the reason scientific giants like NASA and CERN use Linux.

5. Better support for open source software.

As we stated in the beginning, we live in a free world and we like to be masters of our own destiny. One way to do so is by using open-source software. It's no surprise that proprietary operating systems are not very friendly toward most of the open-source software out there – a good reason being that hardly anyone who develops open-source software cares about supporting proprietary OSs. A good example is trying to compile an open-source flash player from source code on Windows. Linux, on the other hand, is free and open source by design, and all the software you gain access to is free and open source.

6. Better usage of hardware.

It is well known that the performance of your machine is better with Linux, so something under the hood must be using your hardware in a more meaningful manner. You will be surprised to find out that your old hardware can run perfectly well with most modern Linux distributions. Why don't you try running Windows 8 🙂
Here are some exact numbers taken from the pages of Ubuntu and Microsoft. We will compare the minimum requirements of Ubuntu 12.10 (the latest Ubuntu at the time of writing) and Windows 8.1 (the latest released version of Windows):

Ubuntu 12.10 (Desktop Edition)

  • 700 MHz processor (about Intel Celeron or better)
  • 512 MiB RAM (system memory)
  • 5 GB of hard-drive space (or USB stick, memory card or external drive; see LiveCD for an alternative approach)
  • VGA capable of 1024×768 screen resolution
  • Either a CD/DVD drive or a USB port for the installer media
  • Internet access is helpful

Windows 8.1

  • Processor: 1 gigahertz (GHz) or faster with support for PAE, NX, and SSE2
  • RAM: 1 gigabyte (GB) (32-bit) or 2 GB (64-bit)
  • Hard disk space: 16 GB (32-bit) or 20 GB (64-bit)
  • Graphics card: Microsoft DirectX 9 graphics device with WDDM driver

I think facts speak for themselves.

7. Faster installation and easier maintenance.

This is one of the points I love most: the installation. If you've ever installed a Windows OS, you probably spent a day doing it, because you need about 30 to 45 minutes to install it, but after that, hell comes to earth. Yes, you have a brand-new operating system, but nothing else: no chat programs (well, I think they ship Skype since Win 8.1), no multimedia, no browser other than IE, no office software, etc. To install, register and update all of the above, you would need about a day – and I really mean it, because Windows updates take forever.

Let's see how it goes on the Linux side: installation takes about 20 minutes, and guess what – it's all there. Everything you need, or at least all the basic stuff to start with, because almost every Linux distribution ships with chat clients, media players, a torrent client, OpenOffice, several browsers, and many, many useful applications. And if you need more, you can get it by importing additional repositories or through a graphical interface called a software center (or something similar). It's not all up to date, of course, but an update takes 10 to 15 minutes max. How about that?

8. User friendly.

There is a myth to bury here: the belief that Linux is a “geeks only” operating system, that you have to be a software developer or admin to use it, that you have to type magic commands in the console to get a normal workflow on your machine. Yes, this was true – 10 years ago. There are still a few Linux distributions that keep it simple and old school, but they are definitely not the most common ones, and I don't think a new user would try them. Almost all other distributions have a user-friendly interface and almost everything is maintainable through the UI, but if you are an advanced user you can always use the console. And that's the beauty of it: it's flexible and fits your demands, no matter your experience as a PC user.

9. You don’t have to pay anything.

No trials, no registration, no CD keys, no cracks and stuff – everything is free and meant to be free. And not only that: there is a culture of sharing and mutual support among Linux users, and as a Linux user you will start to feel exactly the same way.

10. Huge variety of distributions and spins.

There are thousands of Linux distributions out there, each with several spins – a different set of applications and a different graphical interface – to fit everyone’s taste. You can try one, and if you don’t like it you can try another. There is actual logic behind every distribution: which group of users it targets, or why something is done differently. In other words, Linux cares about who we are and what we do, and shapes its development to serve us, not the software’s manufacturer.

Well, there are cons, too …

I know, I know, I know … the Linux haters are already screaming, and in order to be objective I have to point these out, too. But since this post is not about the disadvantages of using Linux, I will just list them, so here they are:

  • Poor graphics driver support – this has been a problem for far too long, and it still does damage. Most of the proprietary drivers for Linux are done lazily by their manufacturers, they lack documentation, and the only alternative is open source, reverse-engineered drivers. This leads us to our next issue …
  • Poor gaming support – since the graphics are f**ked up, it is no surprise that gaming won’t thrive here any time soon. Yes, there are some really good attempts to bring gaming to Linux by Steam and Wine, but it is still a pain in the a** to run a game on Linux. Plus, let’s face it, gaming is totally not open source. Hopefully this will change pretty soon with the rise of the Steam Machine and SteamOS.
  • Really poor support for profession-specific software – this has to do with the compatibility of proprietary software on an open source OS. If you are a designer willing to use Photoshop on Linux, you are screwed; same with AutoCAD, 3D Studio Max and many more. Yes, of course these programs have alternatives, but learning to use an alternative doesn’t always fit your work schedule, and they very often lack certain features.
  • Extended learning curve – using open source software is challenging, and learning to use it properly comes at a price: you simply have to read a lot.
  • Worse support – not valid in all cases, but in the general case, if you pay, you get 24/7 support. In the open source world support is provided by the community. Sometimes a specific product is developed by one guy, and at some point he just quits. This is part of the game.


I don’t say you should use Linux and I don’t say you shouldn’t. My opinion is: give it a try. You will be really amazed by the incredible stuff you will be able to do with it. It has a lot of good features and gives you the freedom that we all cherish. So what are you waiting for …

Lubuntu 13.10 reduce brightness issue

Since the last version of Ubuntu – 13.10, “Saucy Salamander” – I’ve been experiencing several issues: apps stopped working, crashes etc. This is kind of normal; with each new release of an open source distribution there is a period in which you have to be patient until the community releases patches for what got broken. For me the worst was screen brightness. I am still quite new to Ubuntu – I’ve been using Lubuntu for a month or so – but I had no luck finding a GUI or terminal tool to reduce brightness with the open source AMD/ATI driver, and believe me, it was burning my eyes, literally.

Occasionally my function keys worked and I was able to control the brightness, but not for long – as I said, it was only on occasion. So I had two choices: sell my soul to the devil and install the proprietary driver, which by the way SUX, or find myself a solution.
I started looking for one and I can’t say it was easy. That’s why I decided to share the solution here and save some poor user from buying new glasses.

The solution for reducing your screen brightness, so you don’t get sunburn, was pretty simple for me; I hope it works for anyone who needs it.
Here is what you have to do: open a terminal (Ctrl + Alt + T) and execute the following command:

sudo nano /etc/rc.local

The important part here: nano is a text editor, we use it to edit the file, and /etc/rc.local is a file found on Debian-based distributions which runs at the end of all multi-user boot levels, which makes it a convenient place to put stuff (editing it requires root, hence the sudo). Opening the file, we see the following:

#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.
exit 0

As it says at the end, by default it does nothing.
First we need to comment out the “exit 0” statement and add the magic line:

echo * > /sys/class/backlight/acpi_video0/brightness

Where * is a number from 0 to 10, standing for the brightness level you desire – in my case 8. After the changes the file should look like this:

#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.
#exit 0
echo 8 > /sys/class/backlight/acpi_video0/brightness

Save it, reboot your system and voila!
Hope that will help anyone, good luck 😉
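By the way, you can try a value live before committing it to rc.local. The sketch below is an assumption about your hardware – acpi_video0 is just the interface name on my machine; other drivers expose names like intel_backlight, so check what ls /sys/class/backlight/ shows on yours:

```shell
# Probe the backlight interface used above and test a value without rebooting.
# The acpi_video0 path is an assumption - check /sys/class/backlight/ first.
BL=/sys/class/backlight/acpi_video0
if [ -d "$BL" ]; then
  echo "max level: $(cat "$BL/max_brightness")"
  # write a test value; it takes effect immediately
  echo 8 | sudo tee "$BL/brightness"
else
  echo "no acpi_video0 here - run: ls /sys/class/backlight/"
fi
```

If the live value looks right, that is the number to put in rc.local.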

Creating RAM disc – Linux

Since it happened to me to have this task as my homework assignment in the Telerik’s software academy Linux administration course, I decided to have this valuable knowledge shared here.

For starters let’s first define a RAM disk: basically, RAM memory turned into a storage device instead of being used as random access memory.

What are the pros and cons? On the good side of using RAM as a storage device there is one point that always matters: performance. RAM is the fastest memory around; if you need fast reads and fast writes, it is way faster than any HDD configuration, even SSD.

Here’s a small figure showing some numbers:


Obviously results speak for themselves.

Bad sides: well, as we all know, life’s unfair and every performance gain comes at a high price, RAM disks included. Being volatile memory, RAM is completely erased – and all your data lost – if your computer powers off. So keep that in mind and be prepared to copy your valuable data back to your hard drives before you power off.

Here are some quick steps to create a RAM disk on Linux.

First you need spare RAM, that’s for sure, and as you can guess, if you are going to make the RAM disk bigger than 4 GB you will need a 64-bit OS. You can easily check the amount of free memory with the ‘free’ command, using the ‘-m’ parameter for megabytes or ‘-h’ for a human-readable format.
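You can also script that sanity check. The function below is just a sketch, and the 7900 is an example number – in real use you would feed it the “free” column reported by free -m:

```shell
# Sketch: does a requested ramdisk size fit in the currently free RAM?
# In real use, get free_mb with:  free -m | awk '/^Mem:/ {print $4}'
fits_in_ram() {
    want_mb=$1
    free_mb=$2
    if [ "$want_mb" -lt "$free_mb" ]; then
        echo "ok: ${want_mb}M fits (${free_mb}M free)"
    else
        echo "too big: only ${free_mb}M free"
    fi
}

fits_in_ram 2048 7900   # example numbers
```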

Let’s say we want to create a 2 GB RAM disk. We first switch to root:

[code language="bash"]
$ su
# mkdir /tmp/ramdisk; chmod 777 /tmp/ramdisk
# mount -t tmpfs -o size=2048M tmpfs /tmp/ramdisk/
[/code]


The 2048 here stands for the amount of RAM we want to allocate as a RAM disk. Needless to say, it should be less than your free RAM; otherwise you will start hitting swap, and that will be the end of your performance.
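If you want the mount recreated automatically on every boot (the contents, of course, are still lost at power-off), a line like this in /etc/fstab does the same job as the mount command above – the size and mode just mirror the values we used:

```
tmpfs  /tmp/ramdisk  tmpfs  size=2048M,mode=777  0  0
```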

In conclusion, a RAM disk is a really good choice when it comes to performance at low cost, but we need to be careful about what data we put there if our machine is not powered on 24/7.