Network Namespace without Docker

Let’s imagine the following use case:

  • I am connected to several networks (wlan0, eth0, usb0).
  • I want to choose which network to use when I launch an app.
  • My app doesn’t let me choose a specific interface; the choice is delegated to the OS, which picks the default one.

I could of course use Docker, which isolates networks. However, Docker also isolates many other things, requires images, and is not really suited to launching existing apps on your computer.

We are going to use the same mechanism, network namespaces, but manually.

Let’s start by creating a network namespace named 4g:
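The original code block is missing here; a minimal sketch, assuming the `ip` tool from iproute2 run as root:

```shell
# Create a new network namespace named "4g"
sudo ip netns add 4g

# List existing namespaces to confirm it was created
ip netns list
```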

Now we link an existing interface to it (a virtual interface could be used for more complex setups, but here we’ll showcase a physical one from the command line).

Mine is named enp0s20u2:
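The command itself is missing from the extraction; it would look like the following (substitute your own interface name for enp0s20u2):

```shell
# Move the physical interface into the 4g namespace
sudo ip link set enp0s20u2 netns 4g
```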

Once it’s done, the interface is no longer visible from the default namespace. Let’s check it:
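A sketch of the check, plus the interface configuration the next step assumes (DHCP via `dhclient` is an assumption; a static `ip addr add` plus a default route works too):

```shell
# From the default namespace, the interface has disappeared
ip link show enp0s20u2

# Inside the 4g namespace, it is present
sudo ip netns exec 4g ip link

# Bring the loopback and the interface up, then request an address
sudo ip netns exec 4g ip link set lo up
sudo ip netns exec 4g ip link set enp0s20u2 up
sudo ip netns exec 4g dhclient enp0s20u2
```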

Now that I have configured the interface, I need to bind my applications to the 4g namespace, either by prefixing each of my commands:
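The prefix form would look like this (the `ping` target is just an example):

```shell
# Run a single command inside the 4g namespace
sudo ip netns exec 4g ping -c 3 8.8.8.8
```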

or
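Or by opening a shell; every process started from it then inherits the namespace:

```shell
# Interactive shell bound to the 4g namespace
sudo ip netns exec 4g bash
```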

From now on it’s working, but we don’t have DNS resolution yet.

DNS servers are usually set in /etc/resolv.conf, and the namespace functionality offers a mapping system: /etc/netns/<ns>/resolv.conf in the default namespace is seen as /etc/resolv.conf inside the namespace <ns>.

So let’s edit the file:
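A sketch of creating that file (8.8.8.8 is an example resolver; use your provider’s DNS):

```shell
# Create the per-namespace resolv.conf
sudo mkdir -p /etc/netns/4g
echo "nameserver 8.8.8.8" | sudo tee /etc/netns/4g/resolv.conf
```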

Now the namespace is fully functional. We can launch Firefox, for example:
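The missing command would be:

```shell
# Firefox now uses the 4g interface for all its traffic
sudo ip netns exec 4g firefox
```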

Firefox is launched as root, which is not great. To fix it, use sudo to switch back to your user inside the namespace:
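A sketch of the fix ($USER expands in your shell before sudo runs, so it names the invoking user):

```shell
# Enter the namespace as root, then drop back to your regular user
sudo ip netns exec 4g sudo -u $USER firefox
```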

Voila!

Published July 6th, 2016 (updated 2018-06-05) | Categories: Blog, Hack

About the Author:

Passionate about computer science since childhood, and programming in his free time since adolescence, Pierre joined an engineering school specializing in Information Systems, with a Big Data option. He began his career in an IoT research laboratory, where he studied distributed systems both theoretically and practically. Pierre then joined Adaltas. Today he is a Big Data & Hadoop Solution Architect and Data Engineer with over 4 years of hands-on experience in Hadoop and 5 years of experience with distributed systems. He has been designing, developing and maintaining data processing workflows and real-time services, as well as bringing clients a unified and consistent vision of data management and workflows across their different data sources and business requirements. He steps in at all levels of data platforms, from planning, design and architecture to cluster deployment, administration and maintenance, as well as prototyping and application development in collaboration with business users, analysts, data scientists, engineering and operational teams. He also has solid experience as an educator, regularly giving Big Data courses and training for various engineering and master’s schools, facilitating knowledge transfer and the training of teams.
