Nicolas314

All my geeky stuff ends up here. Mostly Unix-related

Home Automation

plug

I was a bit late to the party, but I finally got into home automation and bought a couple of connected power plugs. Price and curiosity decided for me: they can often be found online for really cheap, and they make a great excuse to spend a weekend doing nothing but playing with new gadgets.

The plugs I got are from a noname Chinese company. The packaging is raw cardboard without any markings, and the manual is limited to a thin little sheet of paper with a QR code that sends you to the app store to download an app called SmartLife. So be it.

Start the app and get faced with a login screen that forces you to create an account and register your product. Pain in the butt. Once logged in, you need to identify the kind of device you want to register. That’s a bit surprising. You would expect connected devices to be able to exchange their pedigrees over the net instead of me having to shuffle through thousands of possible products and probably getting it wrong. Alright, moving on.

Next step is device registration: put the app in pairing mode, press the only button on the plug long enough, and magic: your connected power plug now appears inside the app. Great!

Let’s spice it up a bit: the plugs are advertised as Google-compatible, so start the Google Home app and try the additional linking process. The Google menu is scary: there are literally hundreds of IoT device vendors out there, and I had no idea which one I should be looking for. No brand name on the box or on the plugs, and the product name displayed on Amazon does not appear anywhere in the endless list proposed by the Google app. Hmm… Let’s try SmartLife? Yes: Google knows about SmartLife and asks for full control over the account I just created in the app. Why not, who cares? Be my guest.

Once this was done, the two plugs appeared magically under Google Home and I could finally say “Hey Google, turn the lamps on!” and get enlightened. Phew!

That’s fun, but I have no intention of keeping a device that permanently records every sound in my house, so voice command is pretty much out of the picture. Without voice command, then, let me summarize how to switch the lights on in my living room in 8 easy steps:

  • Since the app is registered with a SmartLife and Google account, I only installed it on one device. Which means I am the only one who has the power to switch lights on and off. Which also means that I am the only one who has to rummage through the house to find where I left my bloody smartphone again.
  • Unlock phone with fingerprint. Try again. Nope, try again. Give up and type the password. Avoid the temptation to read the bazillion notifications.
  • Find the app. What’s it called again? Go through the endless list of apps, cursing yourself for not putting it on the home screen with the other 255 must-have apps.
  • Open the app, but since you haven’t used it for a while you need to log in again. Of course you forgot the password, so click “Forgot password” and get a link by email to reset it within 5 minutes. Hopefully. Now is the right time to sort out those bazillion notifications.
  • Now you’re in; make a mental note to write that password down on a piece of paper, and to remember where you left that piece of paper. Select the Devices tab, go to the lamp you want to switch on, and click.
  • Nothing happens, so click again. And again, several times. Press even harder on the phone screen, you never know.
  • Now all your clicking finally gets through and the lamp starts switching on and off like a Christmas tree, ending up switched off, of course.
  • Click again, then wait. Tadaaa, the light is on!

Easy, right?

The next day I looked into advanced options and found a solution: use a timer. Instead of doing it myself, automate it, of course! Program the lamp to switch on when it’s getting dark, and switch off at midnight. Problem solved.

Or maybe not.

Let’s assume I am sitting in my living room, the lamp is 3 feet away, but just like a real lazy bastard I reach for my phone, open the app, and press the button. What happens next?

My phone connects to the SmartLife server. Could be in China, maybe in the USA, who knows?

I get authenticated. My order gets through, and a flag is raised somewhere on somebody else’s servers, probably on Amazon’s cloud in the US.

The plug in my living-room sends beacon packets to its home server every second, asking if it needs to do something. The raised flag finally gets to the plug, which acts and switches the light on.

How many things could go wrong?

My phone could stop working. Empty battery, a bug in the app, a bug in the OS, or it could just be the perfect moment to run a firmware update that freezes the device for 20 minutes.

My phone may not be able to connect to the cloud. This requires an overseas link, which is of course not guaranteed to work. Same problem with the plug requesting commands every few seconds. Servers crash, databases get erased, latency can increase without warning. Seems stupid that a server crash in China would prevent me from turning the lamps on in my living-room. I do not want to subscribe to Amazon’s server health status to figure out whether I will be in the dark tonight at home.

My home network is nowhere near industrial availability. Between the internet modem and the plugs are several switches, a router, and a WiFi access point that sometimes decides to take a few hours off for relaxation.

Depending on a bunch of companies to keep my service alive does not sound very future-proof. What happens when some of those companies go out of business? What is the plan then? Will we throw away all the devices that depend on them for a living? I was looking for a network-controlled device, not for a lifetime service contract with a third-party company whose name did not even appear on the box.

In terms of energy consumption this is not environmentally sound. The plugs keep sending small data packets back home, which is not much in terms of bandwidth but forces every device on the way to grind packets for no reason. An overseas link necessarily involves 15-30 hops one way, that’s a lot of machines working to not light up my living-room.

I also don’t feel too good about needing an online account to protect the plugs from outside attacks. The account itself is a welcome gate for hackers into my network, and the price for closing that gate is to forego any usage of the plugs. Not good.

 

Fortunately there are solutions. Most connected plugs today are based on the ESP8266, a very popular chip in the maker crowd. Just google a bit and you find methods to erase and replace the firmware for those plugs with something that does not need to call home every second.

The project I used is tuya convert:

https://github.com/ct-Open-Source/tuya-convert
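
For the record, the whole flashing procedure boils down to a handful of commands. Script names are as of the version I used; check the project README in case they changed:

git clone https://github.com/ct-Open-Source/tuya-convert
cd tuya-convert
./install_prereq.sh    # install the required packages (Debian/Ubuntu)
sudo ./start_flash.sh  # spawns a fake WiFi AP, then walks you through pairing and flashing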

That is really all there is to it. I used an Ubuntu-based laptop and got everything sorted in just 15 minutes. No need to open the box or solder anything: it all happens over the air. I re-flashed the plugs with Tasmota, an alternative firmware for the ESP8266, and that was it. The two plugs in my house are now mini web servers that can be automated over MQTT or simply controlled from a web browser. Most importantly: they do not require an Internet connection to work, and they do not flood the local network with broadcast messages every second.

https://github.com/arendst/Tasmota
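
Once Tasmota is on the plug, switching it becomes a plain HTTP request or an MQTT message on the local network. A sketch, assuming the plug picked up 192.168.1.50 on the LAN and was configured with MQTT topic plug1 against a local broker at 192.168.1.2:

curl 'http://192.168.1.50/cm?cmnd=Power%20On'            # Tasmota web API: relay on
mosquitto_pub -h 192.168.1.2 -t cmnd/plug1/POWER -m ON   # same thing over MQTT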

I lost only one thing: the ability to switch the lights from outside my house. Can’t say I will miss much.

This got me thinking: most IoT devices today seem designed to be cloud-connected. It is extremely hard to find something that can just be controlled from your local network without needing to ping back to China or the US or who-knows-where. Why? I understand these companies are hungry for user data, but what’s in it for me? An active Internet connection should not be a prerequisite for switching lamps on!

One small note about security: those devices are advertised as top-notch secure, built on military-grade cryptography. And yet you can simply erase and re-flash their firmware over the air without much effort. A simple but deadly mistake: the devices do not validate server-side certificates. It is rather trivial to start your own web server with a self-signed certificate and point a device to it using a well-configured local DNS server. This is great because it let me get rid of the initial firmware, but it also means anyone in the same room could do the same and poison the firmware. Who knows how many other security issues these devices suffer from?

I can easily imagine that instead of relying on distant machines located in data centers, we could host the main control point ourselves in our houses. Modern houses already have Ethernet cabling running through all rooms, with some space in a closet to connect a router and a switch. It would not be hard to install an additional server there, something Raspberry-pi-sized, and use it to control every piece of connected hardware in the house.

You guessed it: there are already several projects on that topic. The most promising I found seems to be Home Assistant:

https://www.home-assistant.io/

I have not looked too much into it yet but will do soon, and maybe write it up here. The next evolution would be to run voice recognition on that same local machine so we do not have to be spied upon by Google and Amazon. I am really looking forward to the next company offering a Home OS. Right now these home automation gadgets are just a jungle of services quickly strung together with no future.

Today’s lesson: don’t buy connected devices if you cannot flash them with alternative firmware.

Written by nicolas314

Wednesday 8 January 2020 at 9:46 pm

Japan Kaleidoscope

cover

Akihabara, Tokyo

I have just spent three weeks in Japan. What can you decently bring back from such a short trip? I have the feeling I have barely touched this extremely rich, multi-millennial culture. A kaleidoscope of colours, smells, sounds, and faces comes to mind when I try to look back on this small adventure. As usual when I come back from Asia, I discover that I have taken home a little more than I bargained for. Some things are changed, some things are broken inside me, only to be re-built from their ashes.

During my first visit to Asia twenty-five years ago, I got hit in the face with fairly obvious culture shocks. Things like counting on your fingers: I wanted to tell the hotel staff we were three guests, so I raised my thumb, index, and middle finger to indicate: THREE. The guy repeated the gesture with his hand, looking surprised. He said something in Chinese (this was Taipei), I said something in English, and we both understood we would be left with gestures for the rest of the conversation. We were three guys standing in the lobby, so I did not get his confusion. I pointed at the two others: ONE, TWO, and myself: THREE. The guy opened his whole hand, showing all fingers, asking something. No, we are not five, we are three. It took us a little while to proceed, because what I had just gestured means EIGHT over there: the hotel clerk was asking me where the other five were. In order to show three, you raise the three middle fingers. Ah, right. I later learned how to count in Chinese, an easy task: with just thirteen words you can count up to 999,999.

There were more embarrassing moments, like the first time I sat on a Korean toilet and could not figure out how it worked since I am quite illiterate in Korean. You may not want to hit random buttons and wait to see what happens, unless you want to come out of the loo covered in smelly water just before an important customer meeting.

Japan has those same magic seats that clean your ass like you are a baby. Once you have figured out the controls, you wonder why these have not been generalized in the Western world already. There is no better feeling than walking around with a clean butt, especially in those hot summers at 40 degrees and 100% humidity.

My major culture shock during this last trip was food. If you have never tasted food in Japan, you have never eaten decent food in your life. Ever. The closest you can get in terms of taste, freshness, and diversity could be Italy, on a very different spectrum. Japanese food does not just build on fish and seafood; that is just one tiny part of it. The default 5-euro bento they bring you in very standard Tokyo restaurants can contain fish and meat of all kinds, but also sauces, spices, vegetables, rice, soy, salad. Some parts are raw, some are deep-fried, some barely cooked. I never knew tofu could be prepared to be so delicious. Rice is always perfect. If you never had them, you owe it to yourself to try okonomiyaki and takoyaki. We found a restaurant in Osaka that serves takoyaki for a bit less than 2 euros for 6 pieces, and 12 pieces will fill you up. A ramen bowl served with half-boiled egg and vegetables can serve as your single meal for the day. Soups are vastly different from one place to another, and always delicious.

food1

Soup, rice, vegetables, tofu, and fried pork

Finding a place to eat in Japan is usually not an issue. Everywhere you look there are between three and thirty restaurants facing you. We tended to stick to the ones offering menus with pictures, where we had a chance to show what we wanted and have an idea of how much we were going to pay. Some restaurants have a vending machine where you insert a note and press one of the many buttons to choose your dish. A ticket gets printed, which you hand over to the cook, who prepares it for you. Only trouble is: all buttons are labeled in Japanese only, so we usually went back and forth to the window outside, where plastic replicas displayed the available food, noting the prices down. If the dish of your choice was the only one costing 530 yen, you knew what you were having.

Even if Tokyo is a gigantic city, it does not have that oppressive feeling you can get inside Paris, for example. Residential areas are packed with small houses of at most two floors, letting you see the sky all around you. In business districts, skyscrapers are sufficiently far from each other that you always see the sky, giving you the impression you can breathe. Of course there are exceptions. Train stations and the quarters around them tend to be real mazes of streets and hidden passages where you can wander aimlessly for hours on end without ever seeing the light of day.

shibuya1

Shibuya crossing

We visited Shinjuku, a busy business district with office skyscrapers all around, filled with salarymen. Around 8pm we saw millions of them walking towards the subway. Salarymen and salarywomen are identically dressed: deep blue pants or a skirt, a white shirt, and tired black shoes. When we reached Shinjuku station we found a store selling salaryman uniforms: endless lanes of deep blue pants and white shirts, with thousands of people all dressed the same, carefully looking around for their next deep blue pants and white shirt. It looked so much like a cartoon. In the subway, they patiently queue where indicated, politely waiting for passengers to come out of the cars before they rush in. Some of those cars were so packed it was suffocating, and the subway usually runs every minute or so. Who would drive a car in such a big city anyway, when there are trains to take you anywhere for pennies?

shinjuku1

Shinjuku station

You might have heard of the cat cafés in Tokyo: yes, they exist, together with rabbit cafés and owl cafés, where you can have your tea and pet a furry creature for an hourly fee. In Akihabara we saw those 8-floor buildings filled with arcade games producing deafening noise, endless streams of fighting heroes, car races, and cute little animals singing annoying songs at full volume. We also visited manga cafés, manga buildings, figurine shops, and duty-free electronics stores. We even got into sex-shops, more by accident than anything: from the outside all you see are bright colors and posters of female anime characters. Once inside, all doubt is gone: the mangas are not the kind you expected, and the large collection of DVDs is probably not something you want to see, because you won’t be able to unsee it ever again.

shop1

May not look like one, but this is a sex-shop

The streets of the electric city are filled with gigantic screens displaying ads, shouting slogans at the crowds passing by. There is no escaping those jingles and loud voices trying to sell you stuff, making Blade Runner feel like a documentary. At night the lights are just magnificent, and the information flood filled me with a sense of panic that made me happy I cannot read Japanese. I wonder how we got through Shibuya without inducing an epileptic seizure.

akihabara2

Akihabara, Tokyo

In Takeshita Dori, we were surrounded by school girls looking for kawaii (cute) things to buy. Shops are filled with pretty much every conceivable product covered with images of little large-eyed furry animals staring at you like Bambi. The street is drowned in a thousand child songs playing too fast, laughing girls, brightly-coloured clothes and people yelling about their stores. I wish I could see that on a quiet winter evening.

 

osaka

Department store in Kyoto

You always hear about Japanese culture and how it wonderfully mixes tradition and modernity, but you only really understand what that means when you find kimono-wearing twenty-something girls shopping for the latest cameras at Yodobashi, or when you see a tiny shrine in a street filled with bright neon lights displaying manga characters. The working men and women of Japan, all wearing uniforms, are facing a wave of creativity that has little equivalent anywhere else in the world. You see a subway car packed with tired salarymen who only want to get home, surrounded by cheeky colourful adverts inviting them to take better care of their skin or to travel to a distant country. Cartoon characters showing their butts, movie stars dressed as animals, and dancing pokemon figures are just expressions of pure creativity. Stark contrast.

shop2

Pocket monsters

Three weeks and a million anecdotes later, I realize some things have changed in me, and I don’t quite know what yet. Taking away some of the conventions I have unknowingly applied all my life, and adding some new ones, gets me a little closer to understanding what being human and living in society really means. Three weeks of being illiterate is a humbling experience.

Written by nicolas314

Saturday 1 September 2018 at 11:31 pm

Posted in japan

Put on your shoes

shoes


– Mister engineer, we are about to leave the house. Could you please lace your shoes?

– I’m afraid I can’t do that before at least next year.

– What? No! We are leaving the house right now. Tie your shoes and let’s go!

– Well, it is obvious you have not been in the shoe-lacing business for quite a while mate. See: in order to tie my shoes I’d have to get my hands closer to my feet. I see three main possibilities:

1. I lower myself down to the level of my feet (and shoes), which is dangerously close to the ground. I could trip and fall, bringing me to ground level with sufficient speed to hurt my nose, probably causing bleeding in the process. Who would want to leave blood on the floor? You don’t want me to hurt myself, do you? This would take us to a large amount of blood cleaning and nose healing, which could take a lot of time and make us both look bad in case someone on the street asks why I have a bloody nose.

2. I could bring the shoes up to my level. Considering my feet would stop touching the ground, I would have very little time to complete the movement needed to effectively tie a knot to what could be considered decent shoe-lacing. Bad knots would make us look bad, and we do not want someone to notice that we are not even able to come out on the street with properly tied shoes.

3. The third and last possibility is to wait for my feet to grow up enough so that my shoes do not fit any more. This would probably trigger some shoe-buying and shoe-replacing, which could then be put to practical use to purchase a new pair of lace-free shoes, which would then solve all the above issues once and for all.

My conclusion is that we should wait until my feet have grown enough. See you in a couple of months.

– Man, you have reached the end of my patience. Let me tie those shoes for you.

– I’m afraid I can’t let you do that, Dave. Your role as a caretaker is not to take responsibility and do things in my stead, but to teach me to be autonomous and let me do things myself. In addition, may I point out that I have had these shoes for a few months now and you have never laced them once in your entire life; therefore I am the only suitable person to achieve that.

– C’mere, let me do it.

– Are you questioning my authority with respect to my own shoes? When you bought them you said they were mine!

– They are still yours, let me just lace them.

– You did not understand the above mentioned points. Apologies for my poor choice of words, I always forget that English is not your native language and you may not get the full power of the most subtle nuances.

– Don’t patronize me. Just don’t.

– Oh that was never my intention. In order to patronize someone…

– WILL YOU FUCKING TIE YOUR SHOES?

– Why the harsh language? Is that really needed? I have only given you the current status and all you can do is react strongly against me. I have not invented laces, nor did I decide to place my own hands at a different altitude than my own feet. I suggest you review our options and come to your senses before we do something we might regret.

– Do you see my hand? I swear it can fly and land on your face in no time.

– Let’s not be too hasty now. I would have to inform legal of your perceived intentions and would have to quote your language. Research indicates that people in your situation have very little chance of winning a legal fight that involves strong wording and physical violence.

– … You know what? You… You just stay here, Ok?

– That’s what I have been telling you all the time. Glad you finally came to your senses mate.

Written by nicolas314

Monday 9 July 2018 at 11:08 pm

What time is it?

clock


– Hello Mr. Engineer, can you tell me what time it is?

– No I can’t.

– Why?

– Well then. You see, my watch is an electronic and mechanical device based on the oscillation of a quartz crystal that imparts a periodic movement to a set of cogs, which are then de-multiplied to bring the base quartz frequency of 32,768 Hz down to exactly 1 Hz, i.e. one beat per second.

– That’s very nice. And what time does your watch show now?

– I could tell you but it would not be useful. See, the quartz frequency is not exactly that power of two, it is itself oscillating with a larger period around that value, meaning that my watch can be ahead or behind by some amounts that are hard to measure, let alone predict.

– So it is inaccurate?

– Yes! You can never tell exactly the time with that kind of device.

– Ok… Seriously, what time is it?

– Not only are the watch mechanics imprecise, but they do not take relativistic effects into account.

– That so?

– Yep. Since Einstein we know time is nowhere absolute. When I put my arm up like this, time flows a little slower because of the Earth’s rotation, and if I put it down like this it goes a bit faster. Or is it the other way around? Anyway, my time reference is unlikely to be the same as yours since we are not moving around in sync.

– Listen, this is all very nice but that was not my question. Will you tell me the time it shows now and I will deal with the imprecision myself?

– No can’t do.

– Why is that?

– Even if you discard all relativistic effects and frequency drifts, the notion of time is not something universal on Earth.

– Care to explain?

– Time is only meaningful in a given time zone. Since the end of the 19th century we have split world regions into time zones, which keep changing at regular intervals based on political choices. In order to tell you the time of day, I need to know a reference time in a given place and convert it depending on your position on the planet. We could use GMT, which stands for Greenwich Mean Time, but it does not even indicate the current time in Greenwich, UK. I could then program a microservice that would give you the current date/time based on a position estimated from your IPv4 address, provided you are not too close to a time-zone border. But then that assumes you have Internet access. Oh wait, do you have an iPhone or an Android?

– Er… Thanks mate. So let’s say we use the current time zone, Ok?

– Do you know if we apply Daylight Saving Time where you stand?

– How would I know? Yes, probably!

– Probably with what probability? Because we could weigh the answer depending on… Hey, where are you going?

– To lunch. I just remembered I wanted to ask you if it was time for lunch.

 

Written by nicolas314

Monday 9 July 2018 at 10:39 pm

Camels

camels

I read somewhere in a math history book that numbers were actually invented to count camels. Someone wanted to send a herd of camels to be sold on a market on the other side of the desert, and they did not trust the camel escort. How would the receiving party know whether camels had been stolen on the way? So they used a fairly simple principle: line up your camels and put one pebble in front of each. Gather the pebbles, put them in a small jar, seal the cork shut, and hand it over to the escort.

On the receiving end, break the jar, put one pebble in front of each camel. You will know immediately if camels are missing.

This apparently went on for a while, until someone figured out that instead of lining up pebbles and camels you could shorten the process by writing signs on the jar to indicate how many pebbles were inside. On the receiving end you just had to look at the signs and compare with what you saw. In case of doubt, break the jar and line up pebbles and camels. And then it was just a matter of time until somebody noticed you do not need the pebbles and the jar at all: just bake a clay tablet in an oven with a text indicating how many camels you are sending.

I have no idea if this story is true, but I like the way it stresses the breakthroughs that happened. Going from a pebbles/camels bijection to a camels/signs bijection was brilliant. The first attempts probably just drew plain strokes on the jar, as many as there were camels in the herd. The next breakthrough was simplifying a whole bunch of strokes into a single sign, e.g. using a hand to signify the number 5. And the last one was realizing that the jar and the pebbles were not needed at all.

Another shift that amazes me to this day is how money actually works. The first currency tokens had actual value, they were made of metal you could melt and use if you so wanted. When the first bank notes were introduced, they switched from actual value to a potential: the note said that you could obtain real metal if you were to exchange that note in a bank.

We now live in a world where I can pay my lunch by waving a piece of plastic over a radio-equipped terminal connected to a bank. My plastic contains numbers that cannot be found on any other credit card, which are used to authenticate me. Now my bank makes a promise to pay my meal to the restaurant’s bank. No metal or paper changes hands.

For a few years now, things have been shifting again. Instead of waving a credit card containing my unique account identification numbers, I can now use a mobile phone containing a series of numbers that are only valid for me, for my account, for today, and for limited amounts. This is what they call tokenization, and the reason it is booming is that storing temporary tokens with limited value is a lot simpler than storing long-term banking credentials with unlimited powers. Security need not be as tight, though you still need to authenticate account owners in a very secure way, but there are plenty of ways to achieve that.

Among the strongest methods we know today to authenticate someone, the most popular relies on the fact that splitting a very big number into a product of primes is practically impossible: even a gigantic computer would require more energy than is available in the universe.

We have come a long way since camel-counting.

Written by nicolas314

Wednesday 1 November 2017 at 11:37 pm

Posted in fun

I am not your daughter

sleepless

You called me quite late. Some time during the middle of the night, and for whatever reason I had left my phone on. I picked it up and was greeted by your anxious voice:
– Isabelle, is that you?
Part of my brain was still actively dreaming, but the part that was emerging found the idea preposterous. Do I sound like an Isabelle? I croaked:
– No Madam, this is not Isabelle.
– Oh come on Isabelle, it’s Mum. Stop playing the fool with me, I recognize your voice, please…
Even half awoken, the tension in your voice was definitely noticeable. You wanted to talk to your daughter and nothing would stop you.
– Madam, I can assure you I am not your daughter. In fact I am a man and my name is Nicolas.
– Bullshit! Isabelle, talk to me!
Your old lady’s tone left me no choice but to obey, so I gave up and decided to play along.
– Alright Mum, you got me. What’s up?
You seemed surprised. Apparently Isabelle does not give up so easily when playing this game. But the sudden joy of being able to talk to your daughter was so great that you could not help it. You started talking about your neighbours at the retirement home, how the nurses were treating you, with many complaints about the food and such. I listened very carefully at first but quickly dozed off; it was the middle of the night, after all.

You called again a bit later, and again, and again. We spent our night like this: you finally talking to your daughter, and me sleeping through 10-minute intervals. Finally one of your nurses must have found out you were secretly phoning at night and you stopped calling.

You never called again. I hope you found the right number for Isabelle and she takes good care of you.

Written by nicolas314

Monday 2 October 2017 at 9:42 am

Posted in Uncategorized

Long live NAT!

ipv6-no-thanks

Home networking can be a lot of fun: setting up a name service, a guest network, or traffic rules leads to the endless joy of discovering new RFCs and exercising creativity in the very active field of artistic configuration file syntax.

I thought I had seen everything until I tried to set up IPv6 connectivity for my home network. Little did I know that this would eat up so many of my precious free evenings. The following writeup is here to remind me never to try that kind of shit ever again, and as a warning to future generations who might want to dig into this kind of topic. Life is short, there are many better things to do than attempt to set up a new addressing scheme for your home network. Long live the NAT king!

The Start

It all began when I noticed that my ISP provided me with a unique (native!) IPv6 prefix to use on my home network. Something like:

2001:1234:5678:9abc::/56

Since I was not familiar with IPv6 addresses, it took me a while to find out that the first 64 bits of a 128-bit IPv6 address designate a network, and the last 64 are reserved to differentiate hosts on that network. My provider handing me a /56 means I have 64-56 = 8 bits to play with, i.e. I can instantiate 256 home networks, each having up to 2^64 = 18,446,744,073,709,551,616 hosts. Overshot a bit, maybe.

So where do I start? Do I have to install specific software? Where? Do I need to buy specific hardware? How many services are needed? And thereby started my long, painful descent into the horrific world of IPv6. Toss and lose 1d20 sanity points immediately.

My ISP unfortunately did not provide any help as to what I am supposed to do with the IPv6 thingie they gave me: not a single help page, very few discussions on their forums, and all the exchanges I had with customer service were completely useless. The best I could find were discussions between customers of a US ISP that provides a similar setup. That is thin.

Say you received a /56 prefix from your ISP. If that prefix ever changes e.g. because you switched to a new ISP, you want things to work automagically because that is the way things currently work with IPv4: changing my public IPv4 address does not change anything to my home network.

In order to do that, IPv6 suggests that home networks use two sets of addresses: the public ones derived from the ISP-provided /56, and another private address space based on something else called a ULA (Unique Local Address). You get to choose your own ULA on your home network(s), preferably based on a good random number generator, but nothing prevents you from taking something like fc00:caca:caca::/48. If anybody else on the Internet picks the same prefix, you will get into trouble when trying to get intimate with each other, e.g. by establishing a VPN between both worlds. We had exactly the same problem when trying to join two sites using IPv4 NAT’d 10.0.0.0/8 subnets, so this is not really a regression. Fun fact: if you have no ULA in France you can always say “Il manque ULA sur mon réseau”.
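
If you want to play by the rules (RFC 4193: fd00::/8 plus 40 random bits), rolling a proper random prefix is a shell one-liner; a sketch:

# random RFC 4193 ULA prefix: 'fd' + 40 random bits, i.e. a /48 to subnet from
openssl rand -hex 5 | sed -E 's/^(..)(....)(....)$/fd\1:\2:\3::\/48/'
# example output: fd4f:9c2a:1b07::/48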

How do you get to choose this ULA? If you happen to have a single router on your home network it should just be a matter of digging through the router IPv6 setup until you find it. But most home networks are now running multiple routers that are all unaware of each other, and all convinced they are masters of the universe. You will most certainly end up with several ULAs. Some of your devices will get several addresses and you will have to understand your own network topology to know which address to use to access them. Prepare for glorious hours of debugging, which is particularly great when facing addresses that are mostly made of bloody random bits.

Why several routers on the same home network? Simply because you may be running several DSL connections, or maybe you have a VPN started somewhere away from your edge router, or maybe you connected your smartphone and it offers another potential exit to the Internet. You also get a virtual router when you start virtual machines on a desktop.

To make things simpler, every network interface on your machines will also generate a local address that is only valid for its closest neighbours, called a link-local address. Unfortunately you won’t go far with that one as it is not supposed to cross boundaries. Think of it as a 127.0.0.1 that extends to the other side of the cable but not further.

OK, so we now have several addresses for each machine on the network. Figuring out which one should be used (incoming or outgoing) is just an unspecified, incredible mess. The link-local address can only be used on very specific physical links, the ULA address cannot be routed to the Internet, and the public addresses you have may change at any moment, e.g. through your smartphone sharing a 4G access.

At that point we have just determined that your printer currently identified as ‘printer’ also known as 192.168.1.20 in IPv4 will now be accessible as:

– fe80::b4cf:1749:b01c:5b2f for machines directly connected to it through an Ethernet cable
– fc00:c465:3b76:b34d:38f7:da19:2586:1cbd for machines living on the same internal network.
– 2001:61af:ff44:b148:4fc3:0097:f35d:c806 for machines on the internet when reached through a first ISP, and another public address for each available ISP connection.

Oh joy.

Of course normal human beings are not meant to remember this kind of random shit. For this kind of thing you have DNS.

DNS you said? What DNS?

There are really two ways machines can obtain an IPv6 address: SLAAC and DHCPv6. SLAAC stands for Stateless Address Autoconfiguration, whereby a machine obtains a prefix and derives its own IP address from it, e.g. based on its own MAC address. Cool, right? You do not have to assign individual addresses in static DHCP leases; every machine does it on its own. But then: how do you know which address was self-assigned by your very smart printer?
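
For the classic EUI-64 flavour of SLAAC, the derivation is mechanical: wedge ff:fe into the middle of the MAC and flip the universal/local bit. A worked example with a made-up MAC (modern systems often prefer random identifiers instead, per RFC 4941, which makes the question above even harder to answer):

MAC address:           00:11:22:33:44:55
Insert ff:fe:          00:11:22:ff:fe:33:44:55
Flip the U/L bit:      02:11:22:ff:fe:33:44:55
Interface identifier:  0211:22ff:fe33:4455
Full address:          <64-bit prefix>:0211:22ff:fe33:4455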

There are dedicated neighbour-discovery protocols for that, but they are mainly designed to make sure that addresses are locally unique and routers know where to find them. This is only taking care of establishing a link, there is nothing dedicated to associating a name to a self-assigned IP address. And if there was, how would you know who to believe? If two machines on the local network claimed to be ‘joe’, what should happen?

To be fair, there are solutions like Bonjour (Apple’s implementation of zeroconf), but they are unlikely to work on lightweight or old devices. Shoot again.

Back to square one: if you want to reach your own machines using human-usable names, you need to run DHCPv6, the protocol designed to handle exactly this. And there you go: back to static leases, addresses assigned by a router and attached to a name. You end up doing exactly the same kind of shit you used to do with IPv4 local networks, except this time the addresses are much easier to screw up.
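
For what it is worth, dnsmasq can play the DHCPv6 and DNS roles in a single daemon; a minimal sketch, assuming a made-up fd12:3456:789a::/64 ULA on the LAN:

# /etc/dnsmasq.conf (fragment)
enable-ra                                                  # emit router advertisements ourselves
dhcp-range=fd12:3456:789a::100,fd12:3456:789a::1ff,64,12h  # stateful DHCPv6 pool
dhcp-host=printer,[fd12:3456:789a::20]                     # 'printer' always gets ::20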

Even worse: if the self-assigned IPv6 addresses are not related to MAC addresses, it means every single host on your local network will have generated its own random address, forcing you to manually harvest them from all devices. But you know how to do that on your connected toaster, right?

What’s in it for the average home network user? Pretty much nothing. The fact that every single one of your home devices has a potentially reachable address on the intertubes is downright scary. Internet service is for internet servers, not for sensors and other IoT bullshit. First thing you will want to do is bullet-proof your firewall to make sure nobody but you can access your printer from the Internet, and hope things are Ok with your IoT shit.

The story did not just end up with me reading thousands of pages on the Internet and a couple of paper books. I hacked every single computer in my house to run IPv6, starting with the routers under OpenWRT, LEDE, FreeBSD, OpenBSD, pfSense, OPNSense, and later moving on to all client OS machines: OSX, Linux, Android, *BSD, and even some Windows boxes, blimey.  I instantiated dedicated DHCP and DNS servers, configured static addresses, automatic ones, bridges and NATs and firewall rules and what-have-you, and I ended up with some machines working under IPv6 only, some under IPv4 only, some that could use both stacks, and some (a lot) that were just unreachable no matter what. Yeah, I also crashed my Internet access several times. Omelet and eggs.

Let me try to put it this way: some of my home machines are servers, e.g. a NAS or a printer. I want to be able to print on ‘printer’ or mount a share on ‘NAS’ without having to remember random 128-bit numbers. Silly me. Since I want to use names I have to assign addresses myself from a router running DHCPv6. Neither NAS nor printer need to be available to the public. So what did I gain compared to a local IPv4 network? Hmm… Address management is not fun with 32 bits, imagine with 128.

Or maybe I am just old-fashioned, trying to manually assign names to my home machines. This might be an idea for a new product: a router that would automatically identify hosts on the home network and show them on a single web interface, allowing you to assign names and forget about addressing altogether. Might get in trouble when you have several identical devices but I’m sure there would be a way. If such a product exists I have not seen it yet.

On the other hand, if I want to browse the Interwebs in v6, I found out that running a SOCKS proxy on a remote cloud box works perfectly well. No need to configure anything: just ssh -D, and the IPv6 world is mine to browse.
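
The whole “setup”, for reference (box name and local port are arbitrary):

ssh -D 1080 me@mybox.example.net   # dynamic SOCKS proxy through a cloud box with working IPv6
# point the browser at SOCKS5 proxy localhost:1080, with remote DNS enabled
# so that AAAA lookups happen on the far side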

Summing it all up

Address assignment is not easier than with IPv4. It still requires dedicated DHCP and DNS servers, only more complicated to configure. You are facing the tedious task of gathering self-assigned IPv6 addresses from all hosts and copying them onto your DHCPv6 server, hoping the self-assignment method won’t change soon.

Routing is now different, but not easier. New constraints are imposed on knowing which interface to bind to when reaching out to the Internet.

Firewalling the whole thing with a mix of IPv4 and IPv6 might tear you a new one. I can already lock myself out of a router with human-readable firewall rules; I cannot imagine doing the same thing with batshit-crazy addresses and feeling safe.

You know what? I will stick to glorious NAT’ing until this mess is sorted out. The good news is that there are many bright people currently working on the topic. All I hope is that they eventually come up with something you and I can use without having to read through a million pages of RFCs, compile obscure daemons, or purchase new boxes, as if I did not have enough of them already.

Speaking of RFCs, this one gathers very sensible requirements for home networks:

https://tools.ietf.org/html/rfc7368

If you have 20 minutes to spare, you should watch this talk:
https://www.youtube.com/watch?v=wQdfWUsG4uI

If you really insist on switching your home network to IPv6, I would recommend reading this rant first:

IPv6 at home (published 2012, still relevant):
http://www.kloepfer.org/ipv6-homenet.html

And to get an idea about how messy it is to get IPv6 configured on Linux:

Set up an IPv6 LAN with Linux:
https://www.jumpingbean.co.za/blogs/mark/set-up-ipv6-lan-with-linux

In its current state, I can only dismiss the IPv6 definition for home networks as very incomplete and unworkable for non-professionals. Let’s hope RFC 7368 will be handled by qualified, creative, and pragmatic people.

Til then, there is no place like 127.0.0.1

Written by nicolas314

Tuesday 28 February 2017 at 11:41 pm

My own little farm

zotac_ci323_03

Virtualization is fun! Virtual Machines are nothing new, we have all been using VirtualBox, qemu, or VMWare at some point to try out new stuff, bring up the odd Windows instance to run annoying software, or whatever. At work we use thousands of VMs for millions of things. The hardware price tag is pretty hefty though: if you want to start a reasonable number of VMs on the same racked server you need very large amounts of RAM and disk space, placing it beyond reach in terms of price for home usage.

Not any more! Prices are dropping for heavy machinery faster than the time it takes to look up prices on Amazon. I found this little gem from Zotac and purchased one for a mere 180 euros from a French site:

Zotac CI323

The little beast sports a quad-core CPU, two Realtek NICs, and a whole bunch of USB ports (including two USB3). Add on top of that an extension card for WiFi and Bluetooth. Perfect choice to build a home router in a VM and leave space for other VM instances. You need to add RAM and disk, the box comes empty. I scavenged 8GB RAM and an SSD disk from a previous build and off we go.

It has been a while since I last had a look at virtualization solutions.  Took me several days to look them up individually and find out what they offer. All the solutions I tried are described below.

Option 1: run VirtualBox on a desktop

Install a convenient desktop like Mint or Ubuntu, run VirtualBox on top.  Unfortunately not a very good option as the VMs would not be as close to the metal as I would want. Dismissed.

Option 2: run Linux containers

Containers are neat but they are Linux only. I would like to run BSD and maybe Windows VMs too on the same hardware, so dismissed.

Option 3: Run a bare metal hypervisor

The main options I could find are:

  • VMWare: run VMWare OS as hypervisor, run any OS on top.
  • bhyve (pronounced like beehive), the FreeBSD hypervisor
  • Proxmox
  • KVM: use virtualization routines offered in the Linux kernel. This can be started from any Linux distro and conveniently run pretty much any OS.
  • Xen: use a Xen kernel as bare-metal hypervisor, run any OS on top.

VMWare ESXi was my first choice but had to be quickly dismissed: my box NICs are Realtek and VMWare dropped support for those a few versions back.  Annoying. There are convoluted HOWTOs explaining how to hack the install ISO to add missing drivers and stuff but I do not want to play that game. The whole setup would probably be broken in the following release so no thanks.

I installed FreeBSD 11 and tried out bhyve. Installing FreeBSD on this particular hardware was a real chore: for some reason the integrated SD card reader has driver issues and booting the machine took up to 10 minutes because of a nasty timeout spitting out kernel traces. I finally succeeded in disabling the driver on boot by adding stuff to device.hints after hours of googling and tests. To be fair, I have always faced issues with hardware support on FreeBSD, but to be completely fair: these are the only issues I ever faced. The OS is so polished and professional it is a real pleasure to use. Other parts of the box were immediately recognized and activated: Realtek NICs and the WiFi+Bluetooth (Intel) board.

Anyway: bhyve is relatively easy to learn, documentation is good enough, and it should run any BSD or Linux-based VM without any effort. Running Windows or OSX VMs would probably not be a good idea though. I have not tried but it seems a bit daring. If bhyve offered an easy-to-use GUI I might have stuck with it, but I finally dismissed it because it is still too young compared to other existing solutions.

KVM: the idea would be to install a very small Linux instance and use it to manage VMs on top with KVM. I tried several:

Ubuntu desktop is far too heavy for a “very small Linux instance”. I cannot believe a simple desktop uses so much RAM and CPU. I tried to manually remove stuff after a default installation and broke the machine completely after erasing ‘evolution’. Forget it.

Ubuntu server is fine enough without GUI, but I would like to have a minimal X11 environment to run VM management software. Unfortunately, as soon as you start adding GUI stuff to an Ubuntu server you start piling up gigs of desktop software you do not want. I could probably figure it out but did not have the patience to do it.

Arch Linux is a royal pain to install. Manjaro (a fairly straight Arch derivative) gets you to a fully configured machine in a matter of minutes.  Problem is: I do want stability on my VM farm and a rolling release is probably not the best choice. Dismissed.

A minimal Debian install worked great. All hardware perfectly supported. And then I tried some KVM tutorials, messed around a bit further with Xen tutorials, and ended up with a completely borked machine. Don’t ask me what went wrong; I just got tired of randomly killing processes and rebooting the hardware. There are certainly good HOWTOs out there explaining how to transform a base Debian install into a Xen/KVM server, but I did not find them. Dismissed.

Alpine Linux to run KVM: did not try, but seems like a possible option.

I tried Proxmox but the default ISO does not install, it crashes miserably after a few minutes of timeout. I have no idea what is going on, but I dismissed Proxmox at that point and came back to it later. Read on.

At that point I was left with Xen as bare-metal hypervisor. I focused on XenServer, a free Citrix project. The OS is based on CentOS 7 with a modified kernel and a GUI on top.

The XenServer install procedure is rather straightforward. Answer a few questions and let it roll. On the next reboot you get an ncurses-based interface on the console that allows you to achieve the bare minimum: configure the host, start/stop VMs, that kind of stuff. You can also do the same through ssh (ssh in, then use xsconsole).

Beyond that, you need to find a Windows desktop because the only management solution they offer is a heavy Windows client. You get a very decent management interface that looks a lot like the VMware vSphere client, from which you can control pretty much everything. The fact that it only runs on Windows is a major pain, but to be honest: you only use it to configure new VMs. Once they are started you access them through ssh, vnc, or rdesktop, so there is no need to maintain a live Windows machine just for that.

In less than two hours I managed to install on XenServer:

  • A minimal Alpine Linux running nginx
  • An OPNSense instance
  • A pfSense instance
  • A Windows 8.1 desktop
  • A FreeBSD 11.0 VM, no X11

I still felt like something was missing though: XenServer would not recognize my WiFi/Bluetooth board. It would have been cool to dedicate a VM to make a stand-alone access point, so I kept trying more stuff.

Among all the options I tried, the only one that had all my hardware covered without a hitch was Debian. Proxmox is based on Debian Jessie, so if I succeeded in installing that, there should be a way to make things work. Let’s try again. I started from Debian and installed Proxmox on top. The guide I used is here:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie

This works and happens to be quite smooth.
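
From memory, the guide boils down to a few apt steps on top of a minimal Debian install (Jessie-era repository URL; follow the wiki for whatever is current):

echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve.list                  # add the Proxmox VE repository
wget -qO- http://download.proxmox.com/debian/key.asc | apt-key add -
apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve                              # pulls in the PVE kernel and tools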

NB: I managed to completely destroy my setup when I decided to change the host IP address without telling Proxmox first. Rebooting the machine does not help, it goes into an endless loop, fails to reconfigure the network, and dies in horrible pain. I took the shortest path and re-installed from scratch. Good advice: DO NOT CHANGE THE PROXMOX HOST IP ADDRESS.

Proxmox is now working beautifully well. The advantages over XenServer for me are multiple:

  • LXC + KVM support: Proxmox supports LXC containers and KVM Virtual Machines in approximately the same way. Of course, containers are much lighter to install, start up, shut down, or backup.
  • Proxmox is completely open-source. XenServer probably has proprietary parts somewhere, though I did not investigate more than that.
  • Proxmox offers a pure Web interface: no need for a heavy Windows client.  You can also open a VNC console on any virtual machine directly from your browser, which is incredibly convenient.
  • Based on Debian, Proxmox identified and supports all my hardware.

Just for fun, I created a local WiFi access point based on Alpine Linux by instantiating an LXC container, assigning the wlan0 interface to it, and booting the right daemons.
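
The interface assignment is the only part that happens outside the GUI. From memory it amounts to a couple of raw LXC keys appended to the container config, something like this (container id, file location, and key syntax from the Proxmox 4.x era; double-check against current documentation):

# /etc/pve/lxc/110.conf (fragment): hand the physical wlan0 to the container
lxc.network.type: phys
lxc.network.link: wlan0
lxc.network.flags: up

Inside the container, the usual suspects (hostapd, dnsmasq) then provide the access point.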

The next VMs I created are:

  • An Alpine Linux desktop under LXC
  • Various Alpine Linux boxes under LXC to run simple services
  • An Ubuntu desktop (under KVM)
  • A Windows 8 desktop (under KVM)
  • A macOS Sierra desktop
  • pfSense and OPNSense as KVM appliances, to evaluate them
  • An OpenBSD box to play with pf in command-line mode
  • A FreeBSD11 box

All these virtual goodies run on the same hardware as I write these lines.

My next task will be to select a solution to use as a home virtual firewall appliance. Meanwhile I am just having fun popping up and down virtual machines as my mood goes.

Completely useless but tons of fun!

Written by nicolas314

Tuesday 8 November 2016 at 3:43 pm

OpenWRT on EdgeRouter Lite

erlite-3-900x500

Installing OpenWRT on EdgeRouter Lite

This installation procedure does not require any extra hardware beyond a Phillips screwdriver to open the router box. I believe it is completely reversible and (hopefully) does not void your warranty.

Objective: replace EdgeOS on your EdgeRouter Lite with a recent version of OpenWRT picked from the LEDE project.

You need:

  • A powerful Linux box, preferably a multi-processor 64-bit machine with tons of RAM. This will be used only once, to compile OpenWRT.
  • An extra USB thumb drive. It needs to be physically short, a few centimeters at most, otherwise it won’t fit inside the box.

Ok now off to build an OpenWRT image:

. Log onto your Linux box

. Download the latest sources from LEDE project:

 git clone https://github.com/lede-project/source
 cd source

. Prepare the tree for compilation for ERLITE:
You will have to select a number of options to build an image tailored to
the EdgeRouter Lite. I could put here a ready-made config file but as these
things tend to evolve quickly, it would probably be obsolete in a matter of
days. So bear with me: start the configuration with

make menuconfig

. Target System: Cavium Networks Octeon
. Target Profile: Ubiquiti EdgeRouter Lite

. Target Images: make sure ‘ext4’ is selected, then select that line to
open up a menu for ext4 configuration. Change the number of inodes to
60,000 instead of the default 6,000. Also select GZip images, and finally modify the root filesystem partition size to something more comfortable,
say 500 MB. This space will be taken off your USB stick so if you have
more space you can increase that to whatever you have. With 500 MB you
should have enough space to put all the packages you need.
. Global build settings: enable ‘Select all kernel module packages by default’.

Beyond that take your pick for packages you want included by default in
your image. My selection is:

  • Base system: base-files, block-mount, busybox, ca-bundle, ca-certificates, dnsmasq, dropbear, firewall, fstools, jsonfilter, lede-keyring, libc, libgcc,
    libpthread, librt, libstdcpp, mtd, netifd, opkg, procd, rpcd, sqm-scripts, sqm-scripts-extra, swconfig, ubox, ubus, ubusd, uci, usign
  • Administration: sudo
  • Development: ar, binutils, gcc, gdb, make, objdump
  • Kernel modules: everything should already be selected as module. You want
    to change some of these to be compiled into the kernel otherwise it will
    fail to find the ext4 root on USB:

    • Filesystems: select kmod-fs-ext4, kmod-fs-msdos
    • USB Support: kmod-usb-core, kmod-usb-storage, kmod-usb-storage-extras
  • Languages: select whatever programming languages you want to see in a
    default install. I usually make sure at least Lua and Python are selected.
  • LuCI: make sure LuCI is selected. Take your pick for applications you
    want to install. I usually select luci-app-openvpn, luci-app-commands,
    luci-app-firewall.
  • Network: if you want your router to act as an OpenVPN client or server,
    make sure it is selected under VPN. Pick either openvpn-openssl or openssl-polarssl.
  • Utilities: bash, bc, file, grep, gzip, less, lsof, openssl-util, strace, tar, tmux, usbutils

Feel free to select more packages but each additional one will take extra
compilation time.

. Type ‘make’ and let the magic go on.
. When finished, the result is stored as:

bin/targets/octeon/generic/lede-octeon-erlite-ext4-sysupgrade.tar

This file contains everything we need to build a bootable USB drive for the EdgeRouter Lite. It should be 500 MB in size since that is what you selected above for your root filesystem, but it is mostly made of zeroes, so bzip2 can shrink it to a more manageable 50-60 MB, which is convenient if you need to toss it around the network.

. Put the sysupgrade.tar file onto a local Linux machine and extract it:

tar xvf lede-octeon-erlite-ext4-sysupgrade.tar

. The directory contents are:

sysupgrade-erlite/
sysupgrade-erlite/kernel
sysupgrade-erlite/root
sysupgrade-erlite/CONTROL

. Now insert your USB thumb drive into a local Linux machine and prepare
the filesystem. We need a first (small) FAT32 partition to hold the kernel,
a second 500 MB partition to hold the root:

fdisk /dev/sdX # Where X is the letter assigned to your USB drive
New partition: 1, 32 MB in size, type c (WIN95 FAT32 LBA)
New partition: 2, 500 MB in size, type Linux (default)
Optional: New partition: 3, the rest of your drive, type Linux (default)
Make the first partition bootable (a).
Type 'w' to save your changes.

Create the FAT32 filesystem with:
mkfs.vfat /dev/sdX1

. The default U-Boot configuration on the EdgeRouter Lite wants a file called ‘vmlinux.64’ in the first (DOS) partition, so let’s do just that:

mount /dev/sdX1 /mnt # Mount the DOS partition
cp sysupgrade-erlite/kernel /mnt/vmlinux.64
umount /mnt

. Dump the root filesystem contents onto the second partition:

dd if=sysupgrade-erlite/root of=/dev/sdX2 bs=1M

. If you have a third partition, create a new filesystem on it with:

mkfs.ext4 /dev/sdX3

. You are done with the USB drive!
. Open the EdgeRouter Lite. There are three small screws to remove on the
back. The box slides open if you push gently.

. Remove the existing USB stick inserted in the reader on the motherboard. Be gentle: you need to pull a bit firmly to take it off, but it is not glued in.

. Insert the USB drive you prepared. Close the box, put the screws back,
and boot the router.

. If you connect a PC to the central NIC (labeled eth1) you should receive
an address on 192.168.1.0/24 from which you can ssh to 192.168.1.1 or open
a browser to http://192.168.1.1

. Welcome to OpenWRT/LEDE! Set a root password and you should be done.

The first things you probably want to do:

  • Change interface names to associate eth0 to WAN, and bridge eth1 and eth2 to
    LAN.
  • Edit the configuration to mount the third partition of the USB drive on /home. This is handy for adding non-root users and giving them real flash storage; see the sketch after this list.
  • Run ‘opkg update’, install missing packages.
  • Install OpenVPN configurations and test them.
  • Add ssh keys for root login in /etc/dropbear/authorized_keys
  • Install dotfiles to feel at home
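
For the /home mount, a minimal /etc/config/fstab entry could look like the
sketch below, assuming the USB drive shows up as /dev/sda inside the router
(run ‘block detect’ to see what yours is called):

config mount
    option target '/home'
    option device '/dev/sda3'
    option fstype 'ext4'
    option options 'rw,noatime'
    option enabled '1'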

Problems I have seen and their solutions:

The LEDE build is not entirely robust: it sometimes fails in parallel mode
because some dependencies appear to be compiled too late. If you get
compilation errors, ‘make -j1’ should solve them. On a powerful server
with tons of RAM you need 2-3 hours to compile the whole set, depending on
how many packages you selected.

The version of LEDE you just compiled will quickly be out of sync with the
official package repositories. As soon as the kernel changes in the LEDE project HEAD, all kmod packages from the LEDE repositories will refuse to install with opkg. This is why you had to select "Select all kernel module packages by default" in menuconfig: all kernel modules are already part of the image you created. This problem should go away once LEDE has released its first stable version.

I had the most trouble with the ext4 filesystem definition: my first
attempt generated an ext4 image of 50 MB, which is far too small. After
increasing that size to 1GB, I still ran into "not enough disk space"
errors and figured out the number of inodes was too low (6,000). If you
install a lot of packages you need more inodes. Both points are addressed
in the above procedure. I also tried an insanely high number (600k inodes)
and the resulting filesystem could not be mounted.

Filesystem size is given in bytes in the build configuration, but fdisk
counts in mebibytes (powers of two). This yields a small discrepancy
between the 500 MB filesystem you generated with the build and the 500 MiB
you reserved in the partition table; the partition ends up slightly larger
than the filesystem, which is harmless.

Once up and running, my router quickly ran into starvation problems: one
machine on the network could hog the whole bandwidth and cut off every
other machine. I installed the QoS packages sqm-scripts, sqm-scripts-extra,
and luci-app-sqm, configured the queue with a fair scheduler, and got rid
of the starvation issues. For some reason I could not get the pre-compiled
versions of these packages to work; I had to re-install them from the
official repositories.
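
For reference, the resulting /etc/config/sqm looks roughly like the sketch
below; eth0 stands for your WAN interface here, and the download/upload
rates (in kbit/s) should be set slightly below what your line actually
delivers:

config queue 'eth0'
    option enabled '1'
    option interface 'eth0'
    option download '18000'
    option upload '1000'
    option qdisc 'fq_codel'
    option script 'simple.qos'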

I wanted to add a Samba server to use the rest of the USB drive as shared
space, but it is not a good idea: Samba takes ages to compile and the
daemon uses too many resources for such a small piece of hardware.

If you want to add ssh keys for the root user, remember the default ssh
server is dropbear, not openssh. dropbear expects root ssh keys to be
stored in /etc/dropbear/authorized_keys. You can also add root ssh keys
through LuCI.

The default shell for root is /bin/sh. You can change it to /bin/bash after
installing bash and modifying the root entry in /etc/passwd, as sketched below.
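
Something like this, sketched for the busybox environment (check the root
line in /etc/passwd afterwards):

opkg update && opkg install bash
sed -i '/^root:/s|/bin/sh$|/bin/bash|' /etc/passwd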

Enjoy your fancy new router!

 

Written by nicolas314

Sunday 16 October 2016 at 4:58 pm

Samba+FreeBSD+OSX Finder

leave a comment »

Tried to run Samba44 on FreeBSD to share files on the local network with OSX machines. Took me a while but I finally figured out that in order to get rid of the infamous ERROR -36 from the Mac Finder, you have to disable sendfile with:

use sendfile = no

As a matter of fact, let me post a complete example of smb4.conf that works between my FreeBSD-hosted Samba and my Mac (Yosemite):

[global]
# Following line is useful on Linux, not on FreeBSD apparently
# socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
realm=*
max protocol=SMB3
winbind enum groups=yes
winbind enum users=yes
large readwrite = yes
max xmit=32768
min receivefile size=2048
use sendfile = no
aio read size = 2048
aio write size = 2048
write cache size = 1024000
read raw = yes
write raw = yes
getwd cache = yes
unix charset = UTF-8
oplocks = yes
workgroup = HOME
server string = PO
security = user
map to guest = Bad User
log level = 5
log file = /var/log/samba4/samba.log
max log size = 50
interfaces = 192.168.0.0/16
hosts allow = 127.0.0.1 192.168.0.0/16
hosts deny  = 0.0.0.0/0
#dns proxy = no
# I do not want Samba to serve printers, thanks
printing = bsd
printcap name = /dev/null
guest ok = yes
guest account = nobody
load printers = no
disable spoolss = yes
ea support = no

# Share the directory in /data/share as 'share'
[share]
path = /data/share
public = yes
guest ok = yes
writable = yes
browsable = yes
create mask = 0777
directory mask = 0777
force user = nobody
follow symlinks = yes

In addition, OSX does not support guest access to an SMB server. The simplest solution I found was to set up a user called ‘nobody’ without a password:

smbpasswd nobody
-> Press ENTER twice to set no password

And then access your share by pointing your Mac Finder to smb://nobody@HOST/share and clicking Connect.
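
From any Unix box with the Samba client tools installed, you can
sanity-check the share before involving the Finder; HOST is of course your
server name:

smbclient -N -L //HOST        # anonymous listing of available shares
smbclient -N //HOST/share     # open the share, then use ls, get, put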

Note to self: never ever install a Samba server after 7pm if you want a full night’s sleep.

 

Written by nicolas314

Wednesday 12 October 2016 at 12:01 am

EdgeRouter Lite

with 3 comments

erlite-3-900x500

My endless search for the ideal home router made me buy a piece of hardware called EdgeRouter Lite by Ubiquiti. The price point is sweet (around $100), making it a damn expensive home router or a damn cheap professional one. For that price you get:

  • A Cavium Octeon processor: 500MHz, two cores, rated 1000 bogomips, MIPS64 architecture, big-endian.
  • Half a gig of RAM
  • Three GBit NICs
  • No wireless
  • No fan, no noise
  • OS completely contained on an easily accessed USB stick on the motherboard, so essentially as much drive space as you want.

The last point is the most important: by just removing three small Phillips screws you can unplug the original USB thumb drive and replace it with your own, equipped with your favourite operating system. If everything fails you can always switch back to your previous state, put the screws back and call it a day. That should not void your warranty but I am no lawyer.

The provided operating system is called EdgeOS, based on Vyatta, itself based on Debian. It seems Vyatta development has halted and Ubiquiti is now steering EdgeOS alone. I used EdgeOS on that router for about six months and have to admit being rather satisfied. The router is sold as the fastest switching home appliance on the market, claiming 1 million packets per second. In order to reach that kind of speed with a (dual-core) 500MHz processor on three GBit NICs you need additional specialized hardware that is only available through proprietary drivers provided with EdgeOS. So be it.

I have a beef with proprietary router firmware though: each vendor seems to feel obliged to invent their own management language. Cisco, Mikrotik, Ubiquiti, you name it. Everything is meant to be controlled from the command line, which is great, but instead of navigating through a familiar Unix environment you need to learn half a million new (proprietary) commands, their syntax, side effects, and how to commit, save, or restore configurations.  This is a royal pain in the butt and I have no desire to go get some training to configure a home appliance.

To be fair, open source versions have had the same issue for years, though some made a huge effort to provide good web-based GUIs for configuration and avoid having to invent a new configuration language altogether. Tomato and DD-WRT have really pushed things forward to reach a decent level of user-friendliness. You only need to know about networking and do not have to worry about learning yet another obscure syntax to handle those.

Too bad: both projects seem to be pretty much abandoned today. DD-WRT has not seen a stable release in almost a decade and Tomato still courageously lives on, maintained by a handful of dedicated devs working from home. The communities for Tomato and DD-WRT are dwindling fast in favour of OpenWRT.

OpenWRT is the most advanced open source router project today. It is well designed, based on a single syntax for configuration files, and supports pretty much every piece of router hardware under the sun. The project was recently forked by its own developers into the LEDE project, which is now the version I am following as closely as possible.

Back to the EdgeRouter Lite: what’s not to love?

Beyond the proprietary software and syntax, EdgeOS offers a web-based GUI that looks fancy and neat but covers only a very, very limited portion of what can be achieved through the command-line interface. This is very frustrating. I love command lines as much as the next geek, but don’t force me to learn a syntax I will use nowhere else just to achieve mundane stuff.

After six months of customizing my home router to my own needs, I had gathered a pile of scripts, e.g. to extract a list of known MACs or some stats. And when I updated EdgeOS to another minor version, everything fell apart. That irked me to no end, pushing me once more into the arms of an open source alternative.

Support for alternative firmware for this router is not obvious to find. OpenWRT has an incomplete wiki page about it. A couple of guys have succeeded in installing FreeBSD but I did not feel up to the task. Debian supports big-endian MIPS64 machines, and a project called DebWRT offers support for this router, merging both Debian and OpenWRT in a single solution. This is cool, but I am a bit terrified about using a straight Linux distro to build a router. If all I have to handle iptables is a bash shell and miles of manual pages, this is not going to work: I hate the iptables syntax with a true passion. The unique config file format used by OpenWRT is a real blessing; there is no way I am going back to one config file format per daemon.

So I started from scratch and built my very own LEDE image, including all the software I want to run on this box. The process is error-prone and it took me several evenings to get right. In order not to lose the information, I will detail everything I did in a later post, hoping it can be useful to someone else.

The net result is a pure LEDE box that has been running without hiccups for a few days now. Configuring routes, VPN, DHCP, DNS is a walk in the park thanks to user-friendly OpenWRT. All my scripts are working again, I can handle backups myself, and I even installed dedicated web and Samba servers. Next step will be to install an ad-blocking name server.

I am certainly losing in terms of performance but I won’t see the difference. Without proprietary drivers, hardware acceleration is gone. This should not be an issue considering my home GBit network is currently handled by a separate switch and my Internet connection is limited to a mere 20MBit/s, orders of magnitude below what the router can sustain. The day I get a GBit Internet connection at home, I will always have the choice to switch back to EdgeOS with a single unplug/plug of a USB key. Or maybe someone will have reverse-engineered the proprietary drivers by then?

There is one alternative I have been looking deep into: using pfSense or OPNsense to build my own firewall. The approach sounds good. I believe the BSD family is technically a lot better than anything Linux-based. This is particularly true in terms of network security software.

Trouble is: pfSense/OPNsense is extremely greedy. You can build a 15-euro router with OpenWRT, but you need PC-sized gear to run pfSense, including at least 1 GB of memory and a lot more than a few megabytes of storage (OpenWRT fits in just 4 megs). The cost of a pfSense appliance can easily run to 400-500 euros, which does not make any sense from a budget point of view. Most people going down that road recommend re-purposing an old PC for the task, but I have absolutely no intention of parking a hungry, loud 300W PC box next to my 20Mbit/s DSL modem; that would be insane.

There lies the whole beauty of this exercise: find the cheapest, least power-hungry, and most efficient way to set up a home routing solution that is easy and fun to configure, flexible enough, and secure. I stopped building my own PCs years ago and cover that need now by building small appliances from scratch, compiling the whole OS myself.

Tinkering is fun!

Written by nicolas314

Wednesday 5 October 2016 at 10:03 pm

Wunder Weather

leave a comment »

Just released this small piece of code a few days back:

https://github.com/nicolas314/wunder

I wanted to be able to bring up the weather forecast for the place I am currently visiting without having to yield my address book to a shady app, or suffer from tons of annoying ads eating through my data plan and phone storage.

The Yahoo weather app is fantastic but has too many ads. Weather web sites are incredibly data heavy, making it nearly impossible to get right to the information I am looking for: is it going to rain today or tomorrow? Expected temperatures?  Android has some ad-less widgets but they usually request GPS positioning and I’d rather not activate location services when I don’t need them.

So I hacked something. Made a web app that identifies your position by geolocating the requester’s IP address, obtains the weather forecast from a reliable source, and displays the only weather information I need on a fast loading page.

First issue: geolocate an IP address.

There are many free services on the net to achieve that. Alternatively, you can download a static list and refresh it at regular intervals, but I wanted to get something a bit more dynamic. I chose:

http://ip-api.com

Their API is dead simple and just works. Provide an IP address, get a country code, city name, latitude and longitude. You do not need to subscribe to their services, just make sure you are not choking them with too many requests.
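
A quick test from the command line, using their json endpoint:

curl http://ip-api.com/json/8.8.8.8

This returns a small JSON document containing country, countryCode, city, lat, and lon, among other fields.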

Second issue: find a reliable weather source.

I first tried openweathermap.org. This is a very cool site but it has a few shortcomings:

You can get the weather for a given [city, country] or [lat, lon]. The list of supported [city, country] pairs is static and can be downloaded from their web site. While they do support a lot of cities in the world, the problem was figuring out how to match [city, country] between what is returned by ip-api.com and what is understood by openweathermap.org. The matching is not 100% accurate.

Getting the weather by coordinates would work but it is far from user-friendly: you end up with "Weather forecast for location Lat=XX Lon=YY". I’d rather look up the weather for San Francisco than for a pair of coordinates I do not immediately recognize.

I ended up looking up [city, country] by computing the smallest distance on the openweathermap list, but that is just tedious and a lot of work for very little gain.

Other major issue: the weather forecast is only provided in GMT, which is utterly useless. What I want is local time, always. What do I care if I am told that it will rain from 2 to 5am GMT if I cannot relate that to local time?

Figuring out a conversion between GMT and local time is a lot trickier than it looks. Thanks to Daylight Saving Time rules that change at seemingly random intervals in various countries, it is very hard to predict the time offset in some places more than a couple of weeks ahead.

A bit of googling around revealed there is an actual API from Google Maps to convert a Unix time stamp + latitude and longitude to a local time. This API takes into account local DST rules at the considered date/time, which is exactly what we want. No need to register with Google, as usual the API is free to use and rate-limited.

Example code can be found here: https://github.com/nicolas314/tz
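
The call itself boils down to a single request. A sketch; note that Google may have added an API key requirement since this was written:

curl 'https://maps.googleapis.com/maps/api/timezone/json?location=48.85,2.35&timestamp=1471305600'

The response contains rawOffset and dstOffset fields (both in seconds) to add to the GMT timestamp to obtain local time.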

In summary: getting the weather from openweathermap would require:

  • One external API call to associate IP to [lat, lon]

  • A search to associate [lat, lon] to [city, country]

  • One external API call to obtain actual weather data

  • One external API call to convert GMT to local time

I have implemented all of that and the result is ugly. OK, let’s see if we can find something smarter.

Next try: wunderground.com

They also offer an API to obtain weather data for any place in the world and they take care of two things: converting [lat, lon] to [city, country], and converting weather forecast to local time. This is exactly what we want.

Their API can also geolocate an IP address, but I found their results to be a lot less reliable than what I get from ip-api.com, so I will stick with ip-api.com for geolocation.

Their terms and conditions are fair. You need to register with them to obtain an API key and that’s about it. Results are delivered in metric units and can be localized in several languages. You also get a pointer to icons symbolizing the weather, which is perfect to generate a nice web page effortlessly.

Some comments about my implementation:

Results from wunderground contain a whole bunch of information I am not interested in, like temperatures in Fahrenheit. Not an issue: the Go JSON decoder lets you declare structs containing only the fields you care about and silently ignores the rest, so you can keep your structs small with only the relevant data.

When running behind a reverse proxy, the incoming requesting IP address you see is the one for the proxy. In order to get the real incoming IP address you need to configure the reverse proxy to pass it along, usually in an HTTP header. Since I am running this service behind nginx, I get the address from X-Real-IP. That is probably different for each reverse proxy out there.
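
On the nginx side, passing the real address takes one extra line in the proxy configuration. A sketch, with a hypothetical upstream address for the Go service:

location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header X-Real-IP $remote_addr;
}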

Hardcoded handlers are provided to take care of requests for /favicon.ico and /robots.txt. I was tired of seeing 404 requests in my logs for these two.

Results are cached by IP address for one hour to avoid flooding upstream API services with requests. Results are displayed from a template that can easily be tweaked. The one I wrote fits nicely enough on both mobile and desktops, your mileage may vary.

I installed the end result on a tiny VPS instance, for my own use. Hoping that could be useful to somebody else.

Written by nicolas314

Tuesday 16 August 2016 at 1:46 pm

Posted in go, programming


Printers from Hell

leave a comment »

My first printer was a black-and-white Samsung laser device that caught my eye in a brick-and-mortar shop by sporting a brave Linux sticker on the front side. These were the early 2000s, mind you, and finding hardware that openly advertised Linux compatibility was quite unusual. The same sticker also showed a Windows and Apple logo, to be honest.

Linux compatibility in those times meant that you were a mere 37 friendly steps away from printing a page, which is quite an achievement if you remember that Linux printing in those ages was still reserved to a very small number of gurus worldwide. You needed to re-compile your kernel, install the correct kind of USB support, a variety of device drivers, a couple of daemons, a postscript interpreter, some fonts, and all the extra cruft that comes bundled with those packages. Under Debian this goes as a simple apt-get but RPM-based distros were less automated. You had to dig through dependencies yourself and keep throwing packages at your hard drive until the damn thing stopped complaining. Tedious. And then you could finally start editing the printer configuration files. If you have never configured these horrors, think of a mix between sendmail.cf and procmailrc.

Since then CUPS has changed the game. Config files are just as obscure but are now XML-based (now you have two problems). CUPS forces you to assign network permissions to access your very local USB device, requiring at least some knowledge of basic network security to be able to print a page.  You also had an option to configure CUPS through a web interface, but that also required manipulating some networking rules and authorizations in an XML file to just bring up the page. Oh the joys of manual CUPS configuration!

Cut to 2007: I purchased a combined color printer and scanner from HP for close to nothing, hoping I would not need the ink. Instead of hooking it into a Linux box, I decided to take the easy path and attach it to a Mac.  This is still running CUPS but at least things are a bit automated with Apple: click “Add Printer” and follow the steps until it says it works.  Surprisingly enough, you needed to delete and re-add the printer every time you upgraded the OS, but thanks to the provided wizard this was not such a pain.

This HP stuff did not work so well. It printed alright, but the cartridges seemed to empty at lightspeed, and the scanner was incredibly slow and could not be controlled from any machine other than the one connected through USB. Better than nothing, I suppose.

Cut to 2014: I saw this fantastic discount for a multi-function color printer and scanner from Epson, for a mere 40 euros. That is about the price of ink cartridges for other printers. Count me in! I had the thing delivered to my place the next day.

Without having read any of the documentation I tried to get the printer to work while attached to a Windows PC. Several hundred megs of software were downloaded, installed, configured, and the results were appalling. The printer would not connect half of the time, I managed to print a test page and that was it.

And then I saw the WiFi logo on the box. WiFi? Sure enough, there was also an RJ45 plug on the back. Still armed with my best pioneer spirit, I painfully configured a WiFi connection on the printer itself — on a tiny 2-inch screen — and lo and behold: the printer became immediately available on the local network for everyone’s enjoyment! Seems this time they stuffed the complete CUPS layer directly inside the printer, and it actually worked! Oh wow.

Ok, it does not always work. You still need to delete and re-configure the printer every now and then, but it has become a lot less painful than kernel re-compilation or XML editing.

I really can’t complain. Printing now works from pretty much any machine at home without having to install a metric ton of crapware and keep it updated. The same machine also offers a “Scanning to the cloud” option that sometimes works. It seems every document I scan for myself has to be first sent to China for approval before it can reach my email or my Dropbox account. Just don’t scan stuff when the Chinese guy is having a cigarette break. As a better option, I put a USB stick into the printer, scan and save onto it, and manually carry the stick back to a computer. And then I lose the USB stick, and it takes me ages to figure out where it is, but that part is only my fault.

I was not so lucky at work. Configuring a printer on a laptop is still as cumbersome as ever. To be fair, I have never seen a printer configuration task take fewer than three experienced engineers. Repeat every time the document you want to print is important and on a tight deadline. In many cases I completely gave up configuring a work scanner. Life is too short.

The time is 2016 and we still have not figured out how to print and scan easily in a standard household or office. As a friend of mine recently told me: “If we ever reach the Singularity one day, we’ll just ask the AIs to configure a printer. That should buy us enough time to invent spaceships and leave the planet before they enslave us all.”

See also: http://theoatmeal.com/comics/printers

Written by nicolas314

Friday 12 August 2016 at 4:25 pm

Posted in printers, Uncategorized


OpenWRT on Ubiquiti AC Lite

with 4 comments

unifi-ac-lite

Stealth AP with hidden logo

A year ago, I thought I had upgraded my home WiFi to AC with the purchase of a cheap TP-Link Archer C5, but it only gave me trouble. The 2.4GHz band works perfectly, the 5GHz band not so much. The first version of OpenWRT I installed had no support at all for the 5GHz mode; I had to dig out from the openwrt mailing-lists that support was under way, find the right rxxxx release, compile it myself and install it, only to be disappointed. In the end I got it almost working: the network would drop every minute or so, and the range would not exceed a few meters from the access point. Quite worthless.

Ars Technica reported last year about switching from consumer-grade WiFi access points to professional ones here:

Ars Technica review of Ubiquiti Access Points

It took me a while to finally give up on the Archer C5 and decide to get the cheapest Access Point from Ubiquiti: the Unifi AC Lite unit.

Several things play in favour of this access point: it is designed to hang on a wall or ceiling for a better wave spread, offers much higher power than your usual consumer-grade WiFi device, and the firmware is in the hands of Ubiquiti, so they take care of making sure the unit performs as it should. Or so I thought.

First surprise: the device cannot be started out of the box. You need to download the Unifi Controller software, a 200MB piece of java software that is meant to control the unit. Install the Controller on any computer on your local network, start it up, let it discover your access point, and you are good to go. The Controller starts up a local web-based interface which is accessed via a local web browser.

Ubiquiti designed it this way because this product is not meant to be purchased as a single unit but in batches and installed in hotels or offices. The Controller software has obviously been designed to maintain a long list of such access points. Having just one looks a bit ridiculous but Ok, I’ll play along.
To be fair: if you do not want to install the Controller locally you can also run it from an internet box (look Ma, it’s the cloud!). For companies or hotels who do not have servers on site it is an excellent idea but for a single AP this is largely overkill. I am not keeping a java server up full time on the Internet to manage an Access Point in my living-room. And if you do not want to run the Controller at all, you can also just start it whenever you need to modify your AP configuration and shut it down afterwards, which is what I did.
I mounted my AP on a top shelf, ran the initial configuration, and was delighted to see a double-band WiFi emerge immediately. Strong signal everywhere in the house, no connection issues, great! A welcome enhancement to my home network.

The only notable modification I brought was to stick a white Apple sticker on the Ubiquiti logo to hide it. The point was not to rebrand it as an Apple product but to hide this ugly U and this was the only sticker I had available that day. The Ubiquiti guys are probably not aware that they have the very same logo as a cheap French supermarket brand and I got tired of seeing that prominently displayed in my living-room.

After about a month, a few things started bothering me though. The AP seemed to have trouble waking up after losing power. First boot would report everything fine but WiFi would drop every connection immediately.  Rebooting the AP solved it every time. Did not feel too good about this.

Nmap’ing the unit revealed an open ssh port, which accepted the admin credentials that were set on the controller software. Once logged in, I found myself in front of some kind of heavily modified OpenWRT. Interesting… So is there anything on the Ubiquiti web site about this modified OpenWRT? After all, OpenWRT is under the GNU GPL (v2), so I expected to find sources, some kind of build system, or anything related to the modified OpenWRT version running on my Access Point, but I could not find anything, at least nothing obvious on the Ubiquiti web site. Bad point for Ubiquiti, but I am not a lawyer.

Nosing around the logs for some explanation for the needed reboots, I found nothing obvious. What I found was that a process was spitting a few lines every five seconds about having no contact with the Ubiquiti Controller software. Several questions on the Ubiquiti forums on that topic were answered with: “yes we know, this is a low-priority fix, just ignore it for the moment”. I have to say that just pushed me over the edge. I understand that my case is not the general use case and I truly do not blame Ubiquiti, but this is not the way I want my AP to work. I do not care much about log files filling up with useless messages but the lack of interest for single-AP users like me is disturbing.

So off to installing a real OpenWRT firmware this time. I finally got it working but it took me a whole afternoon of research to do so, which is summarized here in case it can be useful to somebody else.

First: the current (3.7) firmware embeds an RSA signature check preventing any attempt to install non-Ubiquiti firmware. This is probably due to the recent FCC firmware lockup rules. While I do understand the FCC concerns and the important role they play in Homeland Security, I regretfully do not feel obliged to follow US rules on European ground. This is my hardware now, if I decide to mess it up with another firmware I should be able to do so.

Solution: downgrade the firmware to version 3.4, which is signed by Ubiquiti and does not check firmware update signatures. This firmware can be found by googling a bit around, I got a working URL and the complete procedure from this page:

LEDE/OpenWRT for Ubiquiti UniFi AP AC (LITE + LR + PRO)

While we’re at it, it may be a good idea to drop OpenWRT in favour of the recent fork called the LEDE project. They have added official support for this hardware and the documentation seems a lot cleaner, though very, very incomplete for the moment, which is perfectly Ok for a two-week old project.

If you did not follow what happened to the OpenWRT project recently, you may be interested in learning a bit more about why the team forked:
https://lwn.net/Articles/686767/

Some pre-built images are available from the LEDE project site, but I chose to go all the way and clone the github repository, configure the build to include the software I need, and recompile everything myself. On a beefy x64 server this took about 2 hours and 11GB of disk space, ending up with a 3.3MB image that was happily installed in a single command on the downgraded Access Point.
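
For the record, the whole build boils down to a handful of commands. A sketch, assuming the repository location has not moved since this was written:

git clone https://github.com/lede-project/source.git lede
cd lede
./scripts/feeds update -a
./scripts/feeds install -a
make menuconfig      # select the target, profile, and packages
make                 # add -jN to taste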

The default configuration is smart: the AP tries to obtain an address for itself through DHCP on its only wired interface and acts as a bridge for wireless clients, making it immediately operational when connected to a network equipped with a proper DHCP server. Sweet.

Net result: my Unifi AC AP is now completely stand-alone. I happily removed the Ubiquiti Controller software and customized my AP to death with various scheduling and logging scripts. Wireless range in the 5GHz band covers the whole house and the 2.4GHz allows me to walk outside during Skype calls without losing signal. No more spontaneous rebooting, my logs are clean, and most importantly: I feel empowered :-)

I have to say I do not see open-source firmware disappearing any time soon. For tinkerers like me who like to have complete control over their network, this is absolutely brilliant.

Edit 2016-12-09: adding some config files and diagnostics to this page; they might be helpful if you are trying to replicate this setup.

/etc/config/wireless defines two access points: ap24 for 2.4GHz and ap50 for 5GHz, with passwords SECRETPASSWORD.

config wifi-device 'radio0'
    option type 'mac80211'
    option hwmode '11g'
    option path 'platform/qca956x_wmac'
    option txpower '20'
    option country 'FR'
    option distance '50'
    option channel '3'

config wifi-iface
    option device 'radio0'
    option network 'lan'
    option mode 'ap'
    option ssid 'ap24'
    option encryption 'psk2+tkip+ccmp'
    option macfilter 'deny'
    option key 'SECRETPASSWORD'

config wifi-device 'radio1'
    option type 'mac80211'
    option hwmode '11a'
    option path 'pci0000:00/0000:00:00.0'
    option htmode 'VHT80'
    option txpower '20'
    option country 'FR'
    option distance '50'
    option channel '136'

config wifi-iface
    option device 'radio1'
    option network 'lan'
    option mode 'ap'
    option ssid 'ap50'
    option encryption 'psk2+tkip+ccmp'
    option key 'SECRETPASSWORD'

Here is the output of iw phy0 info:

# iw phy0 info
Wiphy phy0
 max # scan SSIDs: 16
 max scan IEs length: 199 bytes
 max # sched scan SSIDs: 0
 max # match sets: 0
 Retry short limit: 7
 Retry long limit: 4
 Coverage class: 0 (up to 0m)
 Device supports AP-side u-APSD.
 Available Antennas: TX 0x3 RX 0x3
 Configured Antennas: TX 0x3 RX 0x3
 Supported interface modes:
 * managed
 * AP
 * AP/VLAN
 * monitor
 * mesh point
 Band 2:
 Capabilities: 0x19ef
 RX LDPC
 HT20/HT40
 SM Power Save disabled
 RX HT20 SGI
 RX HT40 SGI
 TX STBC
 RX STBC 1-stream
 Max AMSDU length: 7935 bytes
 DSSS/CCK HT40
 Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
 Minimum RX AMPDU time spacing: 8 usec (0x06)
 HT TX/RX MCS rate indexes supported: 0-15
 VHT Capabilities (0x338001b2):
 Max MPDU length: 11454
 Supported Channel Width: neither 160 nor 80+80
 RX LDPC
 short GI (80 MHz)
 TX STBC
 RX antenna pattern consistency
 TX antenna pattern consistency
 VHT RX MCS set:
 1 streams: MCS 0-9
 2 streams: MCS 0-9
 3 streams: not supported
 4 streams: not supported
 5 streams: not supported
 6 streams: not supported
 7 streams: not supported
 8 streams: not supported
 VHT RX highest supported: 0 Mbps
 VHT TX MCS set:
 1 streams: MCS 0-9
 2 streams: MCS 0-9
 3 streams: not supported
 4 streams: not supported
 5 streams: not supported
 6 streams: not supported
 7 streams: not supported
 8 streams: not supported
 VHT TX highest supported: 0 Mbps
 Frequencies:
 * 5180 MHz [36] (20.0 dBm)
 * 5200 MHz [40] (20.0 dBm)
 * 5220 MHz [44] (20.0 dBm)
 * 5240 MHz [48] (20.0 dBm)
 * 5260 MHz [52] (20.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5280 MHz [56] (20.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5300 MHz [60] (20.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5320 MHz [64] (20.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5500 MHz [100] (27.0 dBm) (radar detection)
   DFS state: available (for 2032379 sec)
   DFS CAC time: 60000 ms
 * 5520 MHz [104] (27.0 dBm) (radar detection)
   DFS state: available (for 2032379 sec)
   DFS CAC time: 60000 ms
 * 5540 MHz [108] (27.0 dBm) (radar detection)
   DFS state: available (for 2032379 sec)
   DFS CAC time: 60000 ms
 * 5560 MHz [112] (27.0 dBm) (radar detection)
   DFS state: available (for 2032379 sec)
   DFS CAC time: 60000 ms
 * 5580 MHz [116] (27.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5600 MHz [120] (27.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5620 MHz [124] (27.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5640 MHz [128] (27.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5660 MHz [132] (27.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5680 MHz [136] (27.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5700 MHz [140] (27.0 dBm) (radar detection)
   DFS state: usable (for 2032442 sec)
   DFS CAC time: 60000 ms
 * 5720 MHz [144] (disabled)
 * 5745 MHz [149] (disabled)
 * 5765 MHz [153] (disabled)
 * 5785 MHz [157] (disabled)
 * 5805 MHz [161] (disabled)
 * 5825 MHz [165] (disabled)
 valid interface combinations:
 * #{ AP, mesh point } <= 8, #{ managed } <= 1,
   total <= 8, #channels <= 1, STA/AP BI must match, radar detect widths: { 20 MHz (no HT), 20 MHz, 40 MHz, 80 MHz }
 HT Capability overrides
 * MCS: ff ff ff ff ff ff ff ff ff ff
 * maximum A-MSDU length
 * supported channel width
 * short GI for 40 MHz
 * max A-MPDU length exponent
 * min MPDU start spacing
 Device supports VHT-IBSS.

Here is the output of iw phy1 info:

# iw phy1 info
Wiphy phy1
 max # scan SSIDs: 4
 max scan IEs length: 2257 bytes
 max # sched scan SSIDs: 0
 max # match sets: 0
 Retry short limit: 7
 Retry long limit: 4
 Coverage class: 1 (up to 450m)
 Device supports AP-side u-APSD.
 Device supports T-DLS.
 Available Antennas: TX 0x3 RX 0x3
 Configured Antennas: TX 0x3 RX 0x3
 Supported interface modes:
 * IBSS
 * managed
 * AP
 * AP/VLAN
 * WDS
 * monitor
 * mesh point
 * P2P-client
 * P2P-GO
 * outside context of a BSS
 Band 1:
 Capabilities: 0x11ee
 HT20/HT40
 SM Power Save disabled
 RX HT20 SGI
 RX HT40 SGI
 TX STBC
 RX STBC 1-stream
 Max AMSDU length: 3839 bytes
 DSSS/CCK HT40
 Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
 Minimum RX AMPDU time spacing: 8 usec (0x06)
 HT TX/RX MCS rate indexes supported: 0-15
 Frequencies:
 * 2412 MHz [1] (20.0 dBm)
 * 2417 MHz [2] (20.0 dBm)
 * 2422 MHz [3] (20.0 dBm)
 * 2427 MHz [4] (20.0 dBm)
 * 2432 MHz [5] (20.0 dBm)
 * 2437 MHz [6] (20.0 dBm)
 * 2442 MHz [7] (20.0 dBm)
 * 2447 MHz [8] (20.0 dBm)
 * 2452 MHz [9] (20.0 dBm)
 * 2457 MHz [10] (20.0 dBm)
 * 2462 MHz [11] (20.0 dBm)
 * 2467 MHz [12] (20.0 dBm)
 * 2472 MHz [13] (20.0 dBm)
 * 2484 MHz [14] (disabled)
 valid interface combinations:
 * #{ managed } <= 2048, #{ AP, mesh point } <= 8, #{ P2P-client, P2P-GO } <= 1, #{ IBSS } <= 1,
   total <= 2048, #channels <= 1, STA/AP BI must match, radar detect widths: { 20 MHz (no HT), 20 MHz, 40 MHz }
 * #{ WDS } <= 2048,
   total <= 2048, #channels <= 1, STA/AP BI must match
 HT Capability overrides:
 * MCS: ff ff ff ff ff ff ff ff ff ff
 * maximum A-MSDU length
 * supported channel width
 * short GI for 40 MHz
 * max A-MPDU length exponent
 * min MPDU start spacing

 

Written by nicolas314

Monday 30 May 2016 at 4:41 pm

easy-rsa alternative

leave a comment »

Glad to announce that 2cca, the two-cent Certification Authority, has now been ported to pure C with libcrypto (openssl) as its single dependency. The goal was to make it available on openwrt, since pyopenssl does not seem to be available on this platform without a lot of effort.

As always, I swear this is the last time I ever link one of my sources against OpenSSL… until a replacement is made available.

Back to the point: you can now generate a Root CA, server, and client certificates to use with OpenVPN, with a couple of commands.

Download it from here:

https://github.com/nicolas314/2cca

Compile it with:

cc -o 2cca 2cca.c -lcrypto

Generate a root with e.g.:

2cca root O=Home CN=MyRootCA C=FR L=Paris email=postmaster@example.com

Your root is entirely defined by ca.crt and ca.key in the current directory. Its default duration is 10 years. Now that you have a root, you are going to use it to sign server and client certificates with e.g.:

2cca server CN=vpn.example.com C=FR L=Roubaix email=vpnmaster@example.com
2cca client CN=jdoe C=UK L=London email=jdoe@example.com duration=365

Your server identity is defined by vpn.example.com.crt and vpn.example.com.key. Your first client is jdoe.crt/jdoe.key.

You can verify certificates using openssl verify, e.g.:

openssl verify -CAfile ca.crt jdoe.crt

Certificate serial numbers are 128 bits long and randomly generated, which makes them unique for all practical purposes without having to maintain an incremental index. Your certificate database is the current directory.

Enjoy!

 

 

Written by nicolas314

Wednesday 30 December 2015 at 10:52 pm

Posted in openvpn, openwrt, pki, programming


Easier easy-rsa

leave a comment »

If you have ever set up an OpenVPN server, you probably had to fight your way through the certificate generation steps. Something like what is detailed here:

https://openvpn.net/index.php/open-source/documentation/miscellaneous/77-rsa-key-management.html

The official OpenVPN guide refers to easy-rsa, which is a royal pain in the butt. Even with the HOWTO in front of me, it takes me ages to set things up and if I ever have to come back later to generate more client certificates, I inevitably end up restarting from scratch because I cannot remember which steps I took and where I stored files.

Does not seem so difficult though. You need to generate a Root CA, and then use it to sign a server certificate (which is stored on your server) and client certificates which you distribute to your clients. I re-implemented the whole thing as a Python script in a couple of hours, tested it with an openvpn instance, and it works quite well. The script can be found here:

http://github.com/nicolas314/2cca

It is called two-cent CA because that is exactly what it is. There is no support for security modules like smart cards or HSMs because I do not need them, but since it is based on python-openssl it should not be too hard to make it work with PKCS#11 tokens.

Here is an example session where I create the root, a server identity, and two client identities for Alice and Bob.

$ python 2cca.py root
Give a name to your new root authority (default: Root CA)
Name: MyRoot
Which country is it located in? (default: ZZ)
Provide a 2-letter country code like US, FR, UK
Country: ZZ
Which city is it located in? (optional)
City: 
What organization is it part of? (default: Home)
Organization: Home
--- generating key pair (2048 bits)
Specify a certificate duration in days (default: 3650)
Duration: 
--- self-signing certificate
--- saving results to root.crt and root.key
done
$ python 2cca.py server
--- loading root certificate and key
Give a name to your new server (default: openvpn-server)
Name: myopenvpn-server
Which country is it located in? (default: ZZ)
Provide a 2-letter country code like US, FR, UK
Country: ZZ
Which city is it located in? (optional)
City: 
--- generating key pair (2048 bits)
Specify a certificate duration in days (default: 3650)
Duration: 
--- signing certificate with root
--- saving results to myopenvpn-server.crt and myopenvpn-server.key
$ python 2cca.py client
--- loading root certificate and key
Give a name to your new client (default: openvpn-client)
Name: Alice
Which country is it located in? (default: ZZ)
Provide a 2-letter country code like US, FR, UK
Country: UK
Which city is it located in? (optional)
City: Cambridge
--- generating key pair (2048 bits)
Specify a certificate duration in days (default: 3650)
Duration: 
--- signing certificate with root
--- saving results to Alice.crt and Alice.key
$ python 2cca.py client
--- loading root certificate and key
Give a name to your new client (default: openvpn-client)
Name: Bob
Which country is it located in? (default: ZZ)
Provide a 2-letter country code like US, FR, UK
Country: US
Which city is it located in? (optional)
City: Boston
--- generating key pair (2048 bits)
Specify a certificate duration in days (default: 3650)
Duration: 
--- signing certificate with root
--- saving results to Bob.crt and Bob.key
$ ls
2cca.py    Alice.key  Bob.key    myopenvpn-server.crt  root.crt
Alice.crt  Bob.crt    README.md  myopenvpn-server.key  root.key

You want to keep root.crt for what OpenVPN calls the CA certificate. Do not lose root.key: you will need it whenever you want to issue more client or server certificates. Install the other files as required.

Tested on Linux (Debian, Archlinux) and OSX.

Enjoy!

Written by nicolas314

Monday 28 December 2015 at 12:51 am

Empty Trash Fun Sounds

leave a comment »

You can change the default sound played when you empty the trash on OSX Yosemite by replacing:

/System/Library/Components/CoreAudio.component/Contents/SharedSupport/SystemSounds/finder/empty trash.aif

The new sound must be in AIFF format, which you can obtain e.g. using sox on Linux (also available from brew). Here is a link to the sound I currently use:

https://github.com/nicolas314/files/blob/master/burp.aiff?raw=true
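
Converting and installing the new sound are one-liners. A sketch, assuming your source file is called burp.wav; sox picks the output format from the file extension:

sox burp.wav burp.aiff
sudo cp burp.aiff '/System/Library/Components/CoreAudio.component/Contents/SharedSupport/SystemSounds/finder/empty trash.aif'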

Tons of fun for the whole family. Silly but I like it this way.

Written by nicolas314

Saturday 26 December 2015 at 8:46 pm

Posted in fun, osx


My next desktop: part 2

leave a comment »

own-mac-id

A friend of mine (thanks Ben!) introduced me to this site:

http://www.tonymacx86.com/

Everything you need to build your own Mac from bits and pieces can be found there. Tony maintains very detailed shopping lists for everything you need to build equivalents to Apple’s machines.

To err on the pedantic side, you are not really building your own Mac but rather choosing PC hardware that is suitable to run OSX, Apple’s operating system. To be fair to Apple, there is a lot more to a Mac than just OSX.  When you buy a Mac you get a ready-made machine built from hardware that has been tested to just work out of the box. The OS is pre-installed, your configuration is clearly identified and supported, and you benefit from a long-term warranty that has little equivalent in the PC world. This is especially true for today’s laptops that rarely go through a complete year without experiencing hardware defects. If you have ever brought back your MacBook to an Apple store you know exactly what I mean: the service is top-notch, you bought a lot more than just hardware. Enter the store with a broken machine and come out an hour later with a brand new MacBook containing all of your data (unless you fsck’ed up the hard drive, of course).

That said, if you are ready to spend some time maintaining your own machine alive and pay the costs associated with that, it really makes sense to build your own. The next Genius bar is in your living-room if you happen to be a Mac Genius yourself. Ready for the game?

I started from Tony’s shopping list for a Mac Mini. When you dig into it, you realize that some pieces of hardware are not available here in Europe. There are equivalents but you need to know which values to check and whether there are risks of incompatibility. RAM for example comes in various flavours: voltage, size, speed, and standard. Some motherboards use dual-channel RAM, in which case it is better to buy two RAM chips of equal size rather than a single big one. If you want an independent video card you also need to make sure you can power it properly, otherwise the box will not even boot. And with great power consumption comes noise and heat to dissipate, for which you need an adequate box and ventilation.  That shit is not obvious to get right and Tony’s shopping lists only get you to a certain point, after which you need to start juggling between what you would like to achieve and what is available to purchase in your region. Some parts cannot be delivered overnight and the risk of picking an incompatible device is high, forcing a return and a few more days of waiting. Not a friendly game to play, is it? Ideally you would like to buy one part and be done with it.

The main points for me were: low consumption, small size, and silence.  These were the three reasons why I purchased a Mac Mini in 2007 and they are still true to this day.

Enter the ready-made mini-PCs: several vendors are now focusing on offering mini boxes that pack enough power into the smallest form factors, keeping heat and noise to minimal amounts. The Intel NUC product line first comes to mind, but there are other vendors now on the same market, like Zotac or Gigabyte. After a careful review of most common options, I chose to go with this one:

Gigabyte GB-BXI5h-4200

You can find Gigabyte boxes (called bricks) sporting i3, i5, i7, or Celeron processors. i3 seemed a bit weak and the i7 boxes are apparently extremely loud, so I opted for a Core i5 version for about 400 euros (Nov 2015). I scavenged an SSD from an older build and only had to add RAM chips on top of that to complete the box: two 8-GB chips from a noname vendor for 80 euros should do the trick.

Now off to installation!

Before I installed OSX, I wanted to give the box a test run with some live Linux flavours to see what it was worth. This led me to a first obstacle: the BIOS. Gigabyte provides a very simplified BIOS (text) interface with absolutely no documentation or online help. You are facing pages of obscure names that do not mean anything at all to the uninitiated, and good luck configuring it.

I admit I have stayed away from the whole BIOS/EFI thing these past years and was completely left in limbo as to what I should do. The box could not boot a live Linux Mint USB stick, but I got Ubuntu to boot easily enough.  It seems operating systems nowadays have to be signed to be allowed to run. I found some options to disable that in the BIOS but that did not get Linux Mint to boot. Oh well. The live version of Ubuntu is nice enough, recognizes all the hardware, and gave me a working desktop in less than a minute. Good to know in case I do not succeed in getting OSX to work.

Prepare to spend some time in the BIOS settings though, because nothing will boot until they are correctly aligned. All together, it took me maybe 2 hours to get things straight by trial-and-error. Not your user-friendly-est experience.

I picked the install procedure from here:

Install Yosemite on any Intel-based PC
RehabMan guide to installing on BXI5h

I chose to install Yosemite (OSX 10.10) and not El Capitan (OSX 10.11). The only brief experience I had with an early El Capitan on my previous Mac (mini) involved a disastrous bug that left most fonts completely illegible on my screen.  Better play it safe and stay one version behind, especially on unsupported hardware. I might update later as there are many reports of people successfully running El Capitan on the same kind of box.

The two main points that caused issues were related to getting the damn thing to boot: the BIOS itself, and the bootloader.

BIOS first: I had to juggle with hundreds of undocumented options until I got them right. For posterity, here are some important settings that are working for me now:

BIOS product: MMLP5AP-00
Version: F6

Advanced:
    Intel Rapid Start Technology: [disabled]
    Network Stack: [disabled]

Chipset:
    Onboard audio: [enabled]
    Onboard LAN: [enabled]
    Erp support: [enabled]
    DRAM Frequency Control: [disabled]

Boot:
    Option 1: [UEFI BIOS on HDD1]
    CSM Parameters:
        Launch CSM [enabled]
        Boot filter [UEFI and legacy]
        Launch PXE OpROM policy [do not launch]
        Launch storage OpROM policy [legacy only]
        Other PCI device ROM priority [UEFI OpROM]

Remember to disable Secure Boot as the OS we will install is not signed. Or rather: its signature is obviously not recognized as that of an official PC OS.

I could not get Unibeast to boot this machine, so I ended up using Clover, which works perfectly fine. RehabMan’s guide saved me there. A million thanks to him for publishing this!

Things I remember from these painful moments:

Installing OSX from USB is quite straightforward. Either you have enough device drivers running and it boots, or it crashes almost immediately. If you get past the OSX setup screens you are good to go. It took about 30 minutes to get OSX installed on the box. This is an SSD drive so disk speed is normally not an issue.

The first time OSX is booted you are hanging by a thread as the bootloader is not installed yet. Do not reboot the machine now or you will have to restart from scratch. You absolutely need to follow RehabMan’s procedure to the end to get all of your device drivers sorted out. Install the developer tools, run git, get the necessary files, modify some XML files manually, and run everything through very carefully. Once you have everything ready you can install Clover on the hard drive. I found it to be a pain to configure and did not dare be too adventurous in the options I chose. If it boots, it suits me.

One part you cannot escape is generating a fake ID for this Mac, otherwise you will be locked out of all Apple stuff, including the Apple store. The Clover tools did all of that for me quite nicely. The “About” window shows the machine identified as a MacBook Pro retina from 2013 with 16GB RAM, the processor being correctly identified as a 2.3GHz Intel Core i5 (see attached screenshot).

I never got WiFi or Bluetooth to work, even following RehabMan’s instructions step by step. Something is wrong in my configuration somewhere and I could not figure out what exactly. Not really an issue as I am not using radios on that machine. That said, I was curious and got it to work with a 5-euro external USB WiFi dongle from D-Link, so it should not be too much of an issue if I ever need WiFi.

Once installed, the bootloader prompts you for either a normal or a recovery boot. I never tried recovery, I just assume it works. Cold booting to a login window takes about 10 seconds.

So far everything has been working and the desktop is extremely stable.  There are occasional issues with the audio system crashing and not recovering, but it seems related to a bug in mpg123, a command-line mp3 player I sometimes use to preview mp3 files from a terminal. I switched to VLC for that kind of task and have not had sound crashes since. If the sound system ever crashes again, a 10-second reboot fixes everything.

I applied every system update I received so far and did not get into trouble so it seems OSX is happy. Net gains:

  • New box is about half as big as my previous Mac mini
  • Completely silent, even under heavy load
  • Tremendously faster on all counts! Operations that took minutes before are now measured in seconds. Converting ebooks, encoding movies, or converting flac to mp3 are now a breeze.
  • A lot more comfortable to live with, as 16GB of RAM allow me to keep as many apps running as I want. It is still connected to a 1920×1080 HD screen so the RAM is mostly for apps. I expect things to change the day I hook it to a 4k screen, as video memory will be taken from the same 16GB.

The trip was not uneventful but by all means, it was worthwhile.

Written by nicolas314

Saturday 26 December 2015 at 8:31 pm

Posted in fun, hardware, osx


OpenWRT on MR3020

with one comment

TL-MR3020_3_1600x1600

 

Situation: you live in a one-room apartment and receive Internet through a single RJ45 plug in the wall. How do you extend it to all devices in the room for the cheapest price?

Objective: provide WiFi connectivity on a dedicated LAN where users can see each other and share files easily. The system should be easy to repair or upgrade. Bonus points for extra functionalities like VPN provided to the whole WiFi LAN, ad filtering, or shared folders on Samba.

My first go-to for anything cheap related to computing is the RaspberryPi.  How can you beat it? For 35 euros (amazon.fr in December 2015) you get a full-powered Linux box with RJ45 and enough USB ports to power an external disk and a WiFi dongle. I gave it a try and came to the conclusion that the result would be a little too expensive and probably brittle. I have several WiFi USB adapters lying around and none of them was able to create a WiFi Access Point on a RPi, though they do work flawlessly as WiFi clients. They almost got there but not completely. You need to recompile your own WiFi access point software, and taking care of my own iptables rules is just beyond my patience.

Better option: Openwrt! This open-source Linux distribution is not meant for your average PC but to run on screen-less network appliances. You won’t find desktop apps or anything related to X11 but you have all possible network daemons and tools running there. Openwrt runs on low-power processors like ARM, MIPS, and x86 too of course. It comes bundled with its own package distribution tools (opkg) making administration relatively easy. It is really meant for tinkerers, people who like to open the box and modify what’s inside to do different stuff.

Obvious applications for Openwrt are of course network-related. Build your home router for 25 euros supporting IPv6, customized firewalling, guest WiFi, kid protection, quality of service, network monitoring, Virtual Private Networks, or file sharing, to name a few. A popular application is PirateBox: start the box, create a local WiFi network, all clients connected to it can easily share files through the local access point.  Other cool projects include running your own home telephony over the Internet (asterisk), making a sound box or an Internet radio.

There are also thousands of cool projects if you are so inclined and ready to take your soldering iron out of the dust. Most router hardware parts have hackable GPIO ports you can connect to a breadboard to pilot a 0-5V trigger or even read signals at a reasonable frequency. Check out the DIY section at the bottom of this page to get a few ideas:

https://wiki.openwrt.org/toh/tp-link/tl-wr703n

Note that the WR703N is a 10-20 euro piece of hardware, about half the retail price of an Arduino with Ethernet shield, or of a RaspberryPi.  Sure, you get less hardware, but if your project is purely network-oriented this is by far the better alternative.

Back to our task: find cheap hardware that gets Internet from an RJ45 plug and offers decent WiFi for a few devices.  I set my limit to 25 euros and chose the TP-Link MR3020.  It has the minimal 4MB of internal storage needed to install Openwrt. You read that right: 4 megabytes of persistent storage, in an era where your average smartphone sports several gigabytes of RAM.

There are several ways to obtain an Openwrt image for a given piece of hardware. The first and simplest one is to download it directly off openwrt.org. That does not always fit the bill though, because the team had to make choices about which packages are present in each image. If space is tight (4 megs is kind of narrow) you may not be able to install anything beyond what was pre-bundled. That is actually the case for the MR3020. Solution: extend the local storage. So I plugged in an 8GB USB dongle (5 euros).

Plug the USB in, wait, and… nothing. ‘dmesg’ shows the kernel recognized something USB but did not offer to mount the filesystem. Of course: the kernel modules have been trimmed down to a minimum and USB storage was not among them. Quick install with opkg? No way. There is not enough space left to install a mere kernel module for USB storage, so the default image falls a bit flat. Removing packages does not work either. I tried to trim down all un-needed packages until I realized that every time I uninstalled something, free space on the root filesystem shrank even further. How come?

To understand why, you need to know that the default firmware image is built on top of squashfs, a read-only compressed filesystem. On top of that, openwrt adds an overlay filesystem: a layer that registers all changes to the underlying read-only partition. When you delete a file from such a setup, you actually write into the overlay a mention that this file should not appear any more, so deleting consumes space instead of freeing it. There is another option using a read-write filesystem (jffs2) for the firmware, but apparently it can get into trouble on many routers so I did not even try.
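
To make this concrete, here is roughly what the mount table looks like on such a firmware. This is a sketch from memory, not output from my unit; exact device names and options vary between releases:

% mount
/dev/root on /rom type squashfs (ro,relatime)
overlayfs:/overlay on / type overlayfs (rw,noatime,lowerdir=/,upperdir=/overlay)

Everything you change, and every deletion marker, lands in /overlay, which is why removing packages consumes space instead of freeing it.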

One solution is to build your own openwrt image and embed only the modules you need. That one is tough. You need to dedicate an x64 machine to the task, download a million source files, prepare all the compilers and build tools you ever heard of, fine-tune the package configuration, type make, and wait for a few hours. You end up with an image a bit smaller than the four dreaded megabytes, and a build system taking about 3GB of space on the build host.

To be fair, the openwrt build system is a wonderful thing. It starts by creating its own cross-compilation toolchain for your target hardware, then builds a whole Unix from scratch and ends up packaging everything into this tiny image. Very impressive. I have seen very few professional build systems that are as clean as this one. Congratulations to the Openwrt team for working out such a beautiful system!

If you are not into compiling your own stuff, you have another option: the Image Generator.
https://wiki.openwrt.org/doc/howto/obtain.firmware.generate

This method uses pre-built components that you assemble into a minimal firmware image. It goes a lot faster (a matter of seconds) and does not eat up gigabytes of disk space. Select the few minimal modules you need to at least boot the router on the network and mount USB storage. That is what I ended up doing.
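
For the record, a typical Image Generator run boils down to a single make invocation. The PROFILE and PACKAGES values below are my assumptions for an MR3020 with USB storage support; check the page above for the exact names in your release:

% make image PROFILE=TLMR3020 PACKAGES="kmod-usb-storage block-mount kmod-fs-ext4"

The resulting firmware image lands in the bin/ directory, ready to flash.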

I flashed an image that contained just enough software to run a kernel with network support (duh), USB storage support, package management, and some basic utilities. Plugging the USB storage worked immediately. Yay!

Let’s start again: I have 8GB of space on the USB dongle, far more than I will ever need for the router itself. I created a first 700MB ext4 partition, which should be enough to hold every possible package, configuration files, and even log files if needed. I added a second 128MB partition for swap space, and left the rest as a single ext4 partition.
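
In case it helps, this is the kind of session I mean, run from a Linux PC. The /dev/sdb device name is an assumption: check dmesg to see where your dongle actually landed before typing any of this:

% fdisk /dev/sdb        # create sdb1 (700MB), sdb2 (128MB), sdb3 (the rest)
% mkfs.ext4 /dev/sdb1   # future root overlay
% mkswap /dev/sdb2      # swap space
% mkfs.ext4 /dev/sdb3   # data partition, later shared over Samba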

I booted the router again with the USB dongle plugged in, logged onto it as root, and set up the root filesystem as an overlay on the first USB partition. Reboot the router and there you go: your router space has grown from 4 megs to 700.

This brilliant howto here explains exactly how to do it:
https://wiki.openwrt.org/doc/howto/extroot
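
The heart of it is a block-mount entry in /etc/config/fstab telling Openwrt to use the first USB partition as its overlay. The following is a sketch of the Barrier Breaker-era syntax from memory; follow the howto above for the exact incantation on your release:

config mount
    option target  '/overlay'
    option device  '/dev/sda1'
    option fstype  'ext4'
    option options 'rw,sync'
    option enabled '1'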

Once you have enough space you can start adding back all the packages you need (a sample opkg session follows the list). Just to give you an idea, I have there:

  • A web interface to set everything up
  • Tools to monitor the device and display nice graphs on the web interface
  • DNS server, WiFi Access Point, Firewall
  • Development tools: gcc, git, python, lua, strace, screen, vim
  • A Samba server
  • sshfs to mount remote filesystems locally
  • OpenVPN either as client or server
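
The opkg session to get there is the usual update-then-install dance. The package names below are from memory and vary between releases, so treat them as placeholders:

% opkg update
% opkg install luci luci-app-statistics samba36-server sshfs openvpn-openssl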

After that it is a matter of taking the time to configure each service. The Openwrt wiki pages are a real treasure in this respect: everything you need is there, together with additional ideas for cool stuff to do. One hour later I was done adding VPN support, Samba service for the remainder of the USB dongle space, an ad blocker, a local web site, and a fixed local address for administration (an alias on eth0).

There is one point really worth mentioning about the Openwrt philosophy.  If you have some Unix experience, you know that every piece of server software you install comes with its own system of configuration directories and files, a specific new syntax to learn, a different place for log files, and yet another way to handle PID files. Sure, there is some standardization underway, with most config files under /etc and logs under /var/log, but it seems everybody needs to invent a new syntax for configuration and logging. See a previous blog post about that topic.

Openwrt places all configuration files in a single place, and they all respect the same key/value syntax. They are straightforward to understand, edit, diff, store in version control, or pretty-print. That alone smooths out the learning curve every step of the way and gives you a definite feeling of things done in a proper, consistent, and very clean manner. So there you go: for all your network-y things, you know where to look. The Internet of things starts right here with these boxes, guys.
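
As a small illustration, every file under /etc/config follows the same shape, and a single command-line tool (uci) manipulates them all the same way. The hostname example is mine, not from the wiki:

# /etc/config/system
config system
    option hostname 'mr3020'
    option timezone 'UTC'

% uci set system.@system[0].hostname='mr3020'
% uci commit system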

My next endeavours with OpenWRT will probably need my soldering iron. Those GPIOs on the board look tempting…

Written by nicolas314

Wednesday 9 December 2015 at 12:31 am

Posted in hardware, openwrt, router

Saving pennies

leave a comment »

Today I learned how to save a few pennies on your electricity bill: if you do not use the HDMI output on your Raspberry Pi, you can disable it completely and shave a few mA off the power consumption.

  • On Raspbian: sudo /usr/bin/tvservice -o
  • On Archlinux: sudo /opt/vc/bin/tvservice -o
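
As far as I can tell the setting does not survive a reboot, so one way to make it stick is to re-run the command at boot time, for instance from /etc/rc.local on Raspbian (a sketch; adjust the tvservice path on Archlinux as above):

# appended to /etc/rc.local, before the final 'exit 0' line
/usr/bin/tvservice -o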

The power LEDs can also be switched off on the model 2B and Raspberry Pi Zero. See:

http://www.midwesternmac.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi

Environmental thinking applied to minute details.

Written by nicolas314

Monday 30 November 2015 at 3:17 pm

Posted in raspberrypi

Ad Blocking Appliance

with one comment

BH_LMC

Finally found some use for this Raspberry Pi B (version 1) I had lying around gathering dust: an ad blocker for the whole household! Yay!
The whole thing took me less than an hour to install and no time at all to set up. You download and burn a DietPi image to an SD card, insert it into your RPi, and boot. Log in through ssh as root and follow the instructions. Let it download black lists from the net and you are ready to go.
All machines on my network are now set to use the RPi address as DNS. This works so furiously well it is nothing short of amazing! Spotify without the ads, Android apps without annoying popups and ads. Things just work out of the box. I wish I had found this earlier.
Oh, and if you do not want to dedicate a Raspberry Pi to this, you can always install the package separately on a Debian box, but you need to leave it running 24×7.

The author explains the design in a dedicated blog post, but in a nutshell, here is how it works (a peek at the dnsmasq side follows the list):

  • Pi-hole provides a script to aggregate lists of known ad sites: gravity.sh. Run this every once in a while.
  • Pi-hole uses dnsmasq, a lightweight DNS server. The pi-hole instance is set to use the list from gravity.sh as a black list.
  • Pi-hole sets up a local web server answering all queries with a blank page. You also get a local (PHP) web site giving you statistics about the number of blocked domains.
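
Concretely, the dnsmasq side boils down to one extra hosts file. The file name below is from memory of early Pi-hole versions and may have changed since:

# in the dnsmasq configuration, as set up by Pi-hole:
addn-hosts=/etc/pihole/gravity.list
# gravity.list maps every known ad domain to the Pi's own address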

Best use I have ever had for this tiny box so far!


Written by nicolas314

Sunday 29 November 2015 at 11:52 pm

Posted in hardware, raspberrypi, router, seo

My next desktop: part 1

leave a comment »

512px-Desktop_font_awesome.svg

My main desktop at home is a glorious Mac Mini from 2007. This poor machine has seen every possible hardware upgrade allowed by the motherboard and is starting to show its age, with no support for screen resolutions beyond 1920×1080. There are good 4k monitors out there for about 400 euros and I expect prices to drop under the 300 euro threshold soon, so if I ever want the retina experience at home, it is time for an upgrade!

Heading straight to the Apple store site is a bit disheartening. A decent Mac Mini configuration would set me back 1,200 euros and that is just not going to happen. The mini was a revolution when it was released: small form factor, silent, and powerful enough to run everything I needed. Almost a decade later there must be other contenders on the small-is-beautiful market, so there is absolutely no reason a mini-PC should cost so much. What are the alternatives for a silent desktop running on a BIG screen?

27-inch iMac? Forget it. If I am paying 2,500 euros for a computer in 2015, I expect it to be delivered by Madonna dancing in a g-string, handing me a glass of champagne. And I want a rainbow-farting pony with wings too.

Windows PC? Forget it. My recent experience with Windows 10 has not been stellar, to say the least. My main gripe is with the privacy options that keep popping up, reminding me of the Facebook or LinkedIn way of introducing disastrous privacy options that are ticked on by default. The fact that you have to log onto your home computer using a cloud account is just nuts. Leave me out of this, please! That, and the fact that you have to dedicate 50% of your computing resources to software updates and anti-virus scans, or suffer sudden reboots while you are actively using the desktop because Windows decided you need this security update immediately.

Linux desktop? Why not. I ran Linux desktops for 15+ years and loved them, but I admit I now have quite an investment in OSX software that has no equivalent in the Linux world, unfortunately. Re-training myself on open-source equivalents (when they exist) would take a lot of time I can save by simply staying with OSX.

Is there a way I could run OSX without having to pay the hefty Apple tax? Certainly! It requires building your own machine from scratch but it can be done. These machines are most usually called Hackintosh or Customac. They require a great deal of attention to build and install, and a lot more maintenance than your average OS installation, but if your time is free you can seriously reduce your hardware bill.

With that in mind, I started looking around for simple and reasonable solutions and ended up with a tiny, silent, and remarkably powerful little box that runs OSX Yosemite perfectly well. The complete walkthrough will be described in my next post.

Written by nicolas314

Monday 9 November 2015 at 12:30 pm

Posted in Uncategorized

Meet the Guru

with 3 comments

math-formulae

Tonight I met a guru. Not an expert, a real guru: the kind of spiritual guide who has theories, speaks wonders, and has followers. This all started when a friend of mine invited me over to have a quick chat with this phenomenal genius from Canada who has invented methods that are about to bring a complete revolution to the way we deal with digital data. Data compression to insane ratios, database speedups, multi-dimensional fractal volumes held in a single SMS, you name it. My friend asked for advice about the guy’s theories because he wants to invest money and create a startup.

I was a bit dubious at first, but hey… curiosity took over. I absolutely had to know what the guy was about. We met around a few beers and I asked casually: “so what is it you are doing, exactly?”

Without hesitation, he said: “We have found methods to encode information like tight N-spaces in base 8 around bijective radical algorithms that inject coordinates inside an index that compresses the whole universe in a single integer that can be used to decompose irrational numbers like inverted matrices. You see? Using just one number, N-dimensional spaces can be represented along a bijective access ramp thanks to multi-variate polynomials.”

“Oh wow. Continue?”
He grinned, convinced he tossed enough big words in one sentence to lose me completely. The two disciples around him gently nodded as he went along. He stared right into my eyes as he spoke, never looking for his words, not showing an instant of hesitation.

“According to my method, we re-arrange coefficients in matrices in the most efficient way, heading straight to the core of multi-dimensional data spread over a lattice of chosen binomial functions, up to the power 1,000 or more. These functions are space-filling, they can visit every point in any volume exactly once.”

A pause. I asked: “Interesting. So what are the applications?”

He jumped on his seat: “The whole digital world can now be expressed in real-time!” Still calm, I asked: “give me an example?”

“When you need to identify that a point belongs to a region in a multi-dimensional space, you only need to compute its index along a multi-variate polynomial algorithm that expands the number irregularity into a local singularity to spread out a potential zero.”

Both disciples were still nodding. I was starting to feel uneasy. Among the waves of technical, unrelated words, I could feel he was trying to get somewhere. He was trying to encode data into another representation that has magic powers, but the mysterious polynomial functions were hard to grasp, and the application of this encoding still unclear. I ventured:

“How does this compare to a Fourier transform, for example?”

That one caught him off-balance. Apparently he did not master the vocabulary associated with that field yet, so he just waved the idea away.

“Fourier is bullshit. I am talking about Gödelization of the digital world. You know Gödel, right? He was a genius who encoded the world into numbers!”

“Yes he was, but he only did that to prove a point about self-reference, not to define an efficient encoding. Gödel numbers are only a tool to prove his point. Never heard of the term Gödelisation though. Did you invent it?”

“Nonsense!” he yelled. “Gödelization is helping us create an index that gives you the whole information about anything you want in just one single number!”

“Oh right. So you compress information then?”

“No, no, no.” He looked worried now. “Nothing about compression.”

“But you said earlier…”

“Don’t interrupt me. This is not about compression, this is about efficient algorithms to manage N-dimensional multivariate volumes in NP-space by running in Newtonian arithmetic series as they expand into infinity.”

That went on for another few minutes. As requested, I did not interrupt.  When he seemed finished tossing more words at me, I asked again:

“So what is it you do, exactly?”

At that point he got the message that he was not getting me. His two disciples started moving uncomfortably on their chairs. He talked a bit louder, nailing me with his eyes as he declared his principles like bible verses, dumping more pseudo-scientific sentences into the pot like water in a well.

At some point he said:
“I am working with the greatest worldwide specialists on that topic!  Scientists at the Paris Observatory are using my computations for astronomy!”

There we go. Argument by authority: there are people more intelligent than you who believe me, so you have to believe me.

“That right? As it happens I worked with the Paris observatories for a decade. Who do you work with? Is that in Paris 14 or Meudon?”

Blank stare. Ouch, he did not see that one coming. “I work with…  Jean-Pierre uh…  Letruc.”

“Letruc?”

“Yes, he is a world famous scientist who computes singularities.”

I almost picked up my smartphone to look the name up but common sense prevailed. I think he would have slapped me for doubting his word.

“Never heard about Letruc but who cares. What do they use it for, in astronomy?”

“Well, you know, don’t you? Computing galaxies and stars and stuff.”

“Computing what?”

He sighed, as if it was obvious. “Computing the movements of multi-dimensional Euclidian forms in the space-time continuum.” Duh.

Another pause for a few seconds. That last one was pure gold. And then he started again for the next ten minutes, bashing every possible kind of notion at me, as if hitting me on the head with four-syllable words was going to convince me. He dumped everything he had: information theory and Shannon’s theorem, Euclides spaces, prime numbers, Newton laws, Fibonacci rules, and Gödel’s theorem, again.

I said: “I am really dumb, so explain it to me again: what can I use your…  algorithms for?”

He jumped on that one: “Imagine you have to sort out character strings in lexicographic order. So A, then B, then C, etc. If you take a whole dictionary it might take years to achieve! There are no algorithms to achieve that efficiently.”

“Beg your pardon, there are quite a few. Lesson 1 in any computer science course. I think you and I read the same books.”

“Don’t interrupt me! I have read 25,000 books and I have them all at home!  You academics think you know everything right? You need to think differently! My mathematics have no relationship whatsoever with anything you might have learned and you are too limited to understand the implication of what I am telling you!”

“Sure. Sorry. Please explain again?”

“So you take the strings in input and you choose a multi-variate polynomial function of rank N that maps digital inputs into a real-time computation to yield a single integer number. Then you sort the list of numbers and you are done. In real-time. And even Mr Oracle out there with his powerful databases cannot do anything about that. Hah!”

Ah… Now we were getting somewhere. You transform a list of character strings into numbers to sort them.
I asked: “This… Gödelization, to use your term, has a cost, no?”

“No! It is instantaneous! Real-time! The greatest scientist with super-computers can never find a better transformation because the polynomials are…”

“Ok. It has no cost. And then you are sorting numbers.”

“Don’t interrupt me! You need to listen! You guys never listen, but you might learn something.” He was now literally barking at me, so I asked him to calm down several times, without result.

He started again: “I am working with the greatest minds on this planet on this topic! The guys at INRIA admire my work!”

“So you work with INRIA? You know I did, too? Which office? Nice? Paris?”

That one hit him like a rock. He stuttered: “I… I… I… worked in Paris and gave a talk in Sophia Antipolis a few months ago. But you could not understand anything I said there, you would have to read a very long paper I wrote, 120 pages, which explains everything.”

“Please share the link, I am impatient to read it.”

“You would not understand anyway! People oppose me on the principles that I am trying to do things differently but they are WRONG! They are all WRONG!”

He got so infuriated at that point that one of his disciples left the room, while the other decided to take over and explain to me the principles of integer sorting, as if he were talking to a five-year-old. I tried to interact but was systematically asked to shut up and listen to an endless stream of made-up scientific words picking notions from mathematics, physics, and electronics. The guy stressed every one of his points by quoting famous scientists and laws that made absolutely no sense, like Newton’s law of primes, the well-known Euclides and Peano numbers, or the famous Fibonacci theorem.

“So you take data and transform it into numbers?”

“NOOO!!! YOU DON’T GET IT, DO YOU??”

“Apparently I don’t. But you just said that you map strings into numbers.”

“NO! You don’t understand anything. Listen to me…” And then five more minutes of vocal diarrhea meant to shut me up.

After a while I asked:
“Is there any point in me asking questions? Or do you just want to listen to yourself talking?”

That was pure provocation, I admit, but getting yelled at and being told I was too stupid to understand his genius did not help much in making me comfortable. I left the room as he was erupting in anger. His disciple yelled at me while I was going out: “Ha! You do not even understand the concept of bijection, you dumb ass!”

I could not have taken a single second more of that bullshit.  It took me some time to calm down, and then I tried to reflect on what I had just witnessed. What is this guy trying to achieve, exactly? What is his agenda?

I saw a guy who has mastered a talent for throwing scientific-looking words and concepts into sentences, around an almost-believable story, ending up with promises of getting rich by selling never-done-before powers over digital data. The fact that he has disciples and tries to recruit more is extremely interesting, though I still cannot figure out what he gains there beyond recognition by a few gullible souls. With such a character, latent paranoia is to be expected: people do not understand him, they would not listen, he is fighting against the establishment, and most probably: other scientists and engineers are conspiring against him because they know he holds a truth everybody was dreaming of and nobody could find or understand.

Interesting character. In my life I have met quite a few other gurus: a medical shaman who is the only person in the world who knows about the true powers of plants, an artist turned religious leader of his own sect who teaches philosophy, psychoanalysis, and black magic in the same session.  There are also the conspiracy guys who know every secret about the NSA, the FBI, and the CIA, why the Americans never walked on the Moon, or how they destroyed the Twin Towers themselves. These guys scare me to no end. After spending five minutes with them you realize they do not see you, or anybody around them for that matter. Other people are just witnesses to the fact that they hold some untold truth only known to them, which everybody wants, and nobody would ever believe.

The good thing with having so many voices in your head is that you never feel alone.

Relevant XKCD: https://xkcd.com/451/

Written by nicolas314

Tuesday 7 July 2015 at 11:50 pm

Posted in Uncategorized

XML just called

leave a comment »

mean-science

Lunch time. I had been waiting outside for my colleagues to show up for close to 10 minutes when they said they were following me. Getting impatient, I wrote an SMS to my office mate containing the single word: MANGER. Interestingly enough, Android does not allow me to set a default phone number for a contact, and the SMS was sent to a land line instead of my colleague’s mobile.

The guys showed up, we had lunch, and came back to the office. My colleague saw the red light blinking on his land line and pressed the loudspeaker button without much thinking. We were served a bit of classical music, and then a synthetic voice declared without any emotion: “You have received a message. Reading: OPEN BRACKET EMPHASIS CLOSE BRACKET MANGER OPEN BRACKET SLASH EMPHASIS CLOSE BRACKET.”

Best laugh we’ve had in a while.

My next challenge: get the phone system to read me a whole stack trace.

Written by nicolas314

Tuesday 27 January 2015 at 11:59 pm

Posted in fun

Je suis Charlie

leave a comment »

The Paris events were dreadful. Being assassinated for drawing cartoons is probably the most stupid thing that could ever happen. I am particularly touched by the death of Cabu, a cartoonist who, to me, has been part of the French culture forever. Whenever something of importance happened worldwide, Cabu was always there to draw it with a short, to-the-point cartoon that would highlight absurdity in the smartest and most hilarious way. When I was a kid Cabu was drawing stuff live in kid TV shows. It really feels like losing a soul brother who got killed for drawing smart stuff. Charlie Hebdo is purely about provocation and pushing freedom of speech to its limits, and Cabu was a genius for defusing tense situations by making everybody laugh. His cartoons went beyond just laughter, he really made you think twice about the topics at hand.

The aftermath was also dreadful. The manhunt, more killings, the hostage situation inside Paris. Every piece of news was worse than the previous one, until the police put an end to it.
Anyway, I did not go to the march. I would have wanted to, but taking my kids into such a crowd would have been difficult, to say the least. There are other ways to participate, like having a 2-hour conversation with my boys about what happened, why freedom of speech is capital to preserving our way of life, who these cartoonists were, and how some very stupid people can be manipulated into creating chaos, bringing the worst upon themselves in the doing. When you read the third guy’s conversations with his hostages and hear what he left in the video that was published after his death, you realize that you are not facing an extremist but somebody who probably has trouble making a complete intelligible sentence, let alone holding a coherent train of thought. There will always be simpletons like him, but those who received military training should probably be monitored to keep such things from ever happening again.
I am not scared. I believe nobody in France is scared of terrorist attacks. The most common reaction is sadness, and people appalled by such immense stupidity.
If I had to keep anything positive from these events, it is the unanimous reaction of horror that shook the whole world. Seeing all these marches around the planet made me feel all fuzzy inside. The world is a better place today than when I was born, and this is becoming visible. It feels like the whole planet has grown up. There are still some dark places left, showing we need more education than weapons. Hopefully that will happen within our lifetime.

Written by nicolas314

Wednesday 14 January 2015 at 12:44 am

Posted in Uncategorized

Parental control with OpenWRT and OpenDNS

with 9 comments

images

The following recipe took me a whole evening to find, so I am documenting it here in hope it could be useful to somebody else.

I recently upgraded my home network to a beefier TP-Link Archer C5 (75 euros on Amazon). This little box packs two Wi-Fi access points at 2.4 and 5GHz (Wi-Fi ac), which pushes wireless speeds up to 500Mbit/s within a few meters’ range. The main selling point for me was that it runs the latest OpenWRT firmware with absolutely no issue whatsoever. Flash firmware, done.

OpenWRT has become a real Linux distribution today, packing more power than you could ever imagine achieving with such hardware. I certainly miss the Tomato user-friendly GUI, but I do enjoy the power at my fingertips when it comes to network configuration. Kudos to the OpenWRT team for such a technical achievement!

Back to the point: parental control. I have kids at home and all sorts of networked devices: smartphones, tablets, computers, servers, printers, you name it. I want to be able to disable adult-site browsing and the like on the kids’ hardware. The easiest solution I have found so far is OpenDNS (http://www.opendns.com), which offers free DNS filtering for one IP address. Create an account, configure your home IP address, set the categories you want to ban, and done. Any machine on my internal network using OpenDNS will receive redirects for unwanted sites. In the past I used to manually modify the DNS settings on all the kids’ hardware to switch to the OpenDNS servers, but that quickly gets old, and sometimes requires some sleight-of-hand to configure. Forget it.

Enter OpenWRT: you can actually assign different DHCP settings to hosts on your network, e.g. different DNS servers. Even though the documentation is respectably thick on that topic, it took me a while to understand it.

In its latest incarnation Barrier Breaker (Dec 2014), OpenWRT packs all DHCP information into /etc/config/dhcp. Make your modifications there and restart the dnsmasq daemon to activate them.

Procedure:

1. edit /etc/config/dhcp to add a new section

config tag 'kids'
    list dhcp_option '6,208.67.222.222,208.67.220.220'

2. Now add an individual section for each device you want to carry the ‘kids’ tag (DHCP option 6, used above, is the standard option carrying DNS server addresses):

config host
    option name 'pluto'
    option mac 'YOUR DEVICE MAC ADDRESS'
    option ip 'YOUR DEVICE ADDRESS ON THE INTERNAL NETWORK'
    option tag 'kids'

3. Restart dnsmasq with: /etc/init.d/dnsmasq restart

And you are done. Just tag the hosts you want to be part of the kids zone and they will be handed the OpenDNS servers instead of the default one.
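
A quick way to check that the tagging works: renew the DHCP lease on one of the tagged devices, then look at which server it resolves against. From a laptop in the kids group, something like:

% nslookup example.com
Server:     208.67.222.222

If the Server line shows the OpenDNS address instead of your router, the tag is in effect.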

Written by nicolas314

Wednesday 10 December 2014 at 10:24 pm

Posted in Uncategorized

PC-to-PC 101

leave a comment »

Ethernet RJ45 connector

Got my hands on an old but faithful laptop recently. The thing is a lot faster than my son’s desktop, so I thought it would make a nice upgrade for him. First thing I did was swap the laptop’s disk for an SSD and install Linux Mint. This is now the fastest computer in the house, with a total Linux boot time averaging 3-4 seconds from power button press to full windowed environment. Yay!

Next task: transfer his home directory from the desktop to the new laptop. Both machines are currently connected to the home network through Wi-Fi, and we are talking about transferring about 100 Gbytes. Even with a beefy router averaging around 1Mbyte/sec, that means 24+ hours of transfer. Not ideal. One solution would be to connect an external hard drive, copy all contents, then restore on the other machine, but I thought there must be a better way. Both machines have Gbit RJ-45 connectors, so why not take advantage of them?

Now comes the problem: how do you create an ad-hoc network between two Linux machines to allow them to exchange data through a single Ethernet cable? Bonus points if you do not use a router.

The first thing that came to my mind was: install the necessary software to turn one of the PCs into a simple router. A simple DHCP server and very basic routes should be enough. But no, that is not even necessary: switch both PCs to fixed IP addresses like 10.0.0.1 and 10.0.0.2 and it should just work. Which I tried, without much success. A ping would work for a few seconds but both PCs lost their network connection almost immediately, and I could not figure out why. I tried GUI configuration, modifying /etc/network/interfaces, manually assigning addresses with ip addr, but nothing worked. The first packets would go through fine, I could even connect with ssh, and after 10 seconds connectivity collapsed on both sides. We have been living with DHCP-configured networks for so long now that the default Linux networking tools all assume some kind of dynamic addressing scheme handled by a proper router. No luck.

I must have launched ifconfig half a million times and never got a proper answer about why this damn thing would disconnect itself after 30 seconds. And then I suddenly realized the solution to my problem was right in front of me:

% ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::210:99ff:fe26:6db1  prefixlen 64  scopeid 0x20<link>

The answer is on the bottom line: the IPv6 link-local address, of course!

IPv6 local links are designed precisely to work without a router or DHCP server. Plug a cable, talk. No need for any kind of configuration, gateway, or network mask. Let’s try:

% ping6 fe80::210:99ff:fe26:6db1%eth0
PING fe80::210:99ff:fe26:6db1%eth0(fe80::210:99ff:fe26:6db1) 56 data bytes
64 bytes from fe80::210:99ff:fe26:6db1: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from fe80::210:99ff:fe26:6db1: icmp_seq=2 ttl=64 time=0.073 ms

NB: if you use link-local IPv6 addresses, you need to suffix the address with the NIC name (%eth0). Once that was cleared up, I could start an ssh session from one box to the other without further thought. No software installation or configuration of any kind is required.
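
If you do not know the link-local address of the other side, it can be read straight off the interface on that machine (same address as in the ifconfig output above):

% ip -6 addr show dev eth0 scope link
    inet6 fe80::210:99ff:fe26:6db1/64 scope link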

The remaining bit was to find out how to use IPv6 addresses with scp. A bit of googling revealed that scp needs IPv6 addresses to be surrounded by (escaped) brackets, plus -r to recurse into the directory. Like this:

scp -6 -r nicolas@\[fe80::210:99ff:fe26:6db1%eth0\]:/home/music .

With both machines running on SSD drives and on Gbit NICs, the whole 100 Gbytes took just a few minutes to transfer.
Today I learned IPv6 can be useful immediately.

Written by nicolas314

Wednesday 15 October 2014 at 12:05 am

Posted in fun, linux, networking

You do not need to travel

leave a comment »

Piles-of-paperwork-002

Both my passport and national ID expire this month, so I did what I had to do and made an appointment with the local Mairie to get them renewed. Appointment set for 8:30 this morning.

If you have ever dealt with French administrations, you know the drill: you stand in line for several hours and when your turn finally comes, you discover that you needed one more paper, so you go back home and start again. Count 2-3 rounds minimum for standard procedures, 4-5 on average. I love nothing more than that blessed moment when the fonctionnaire in front of you realizes that you finally have all the requested papers and will not be able to say: “Sorry, I can’t process your request”, because they have to do actual work. In general they sigh heavily and take all their time doing their job.

I was prepared this morning, with every possible official document they might request and several copies of each. I carefully followed the instructions on the official web site, printed the forms, filled in whatever I could, signed everything, and made more copies.

At 8:29 I was standing in the town hall.

The front desk clerk was a middle-aged lady who was sitting at her desk, blankly staring at nothing. I asked:

– Good morning. Is it open yet?
– No sir. We open at 8:30
She resumed her staring, in an impressive state of nothingness.
I checked the clock behind her, staring for 30 long seconds, then asked again:
– Good morning. Is it open yet?
– No… Er… Wait
She absent-mindedly clicked her mouse to open her session and sighed:
– What do you need?
– I have an appointment to renew my passport.
– Ok, here are some forms to fill. You can sit over there and…
– I have printed those forms myself and filled everything, so what next?
– Here is your ticket. Go wait over there.

I respectfully sat and started counting my documents again, when a bell called me to desk number 8.

The lady at desk number 8 looked a bit stressed. She tried to speak as fast as possible but only managed to mumble, and had to repeat everything she said.

– No need to stress, Madam, this is just a standard passport renewal procedure.
– Yes, but you need to know that everything we do is strictly timed from now on to improve our productivity. I need to do things as fast as possible.
– Sure. Great! How can I help?
– Show me all your papers. You forgot your birth certificate, right?
– No, actually here it is.
– Certainly you forgot to make copies?
– Nope. Here they are.
– You most probably forgot the tax stamps, didn’t you?
– Nope, there: 86 euros.

This dance went on for several minutes. She kept asking me to confirm that I had forgotten something while I kept handing papers over. I could see her face fall as it dawned on her that she would have to perform the complete gig.

After five frantic minutes of typing and scanning papers, she declared the round over.

– There you go. Your passport will be ready in three weeks.
– Ok. Now on to the national ID card renewal.
– Wait… What? You can only renew your passport!
– Well actually no. My ID card will expire next month. And I have all the papers, see? Same ones for passport.
– But… but… but that is impossible! You only took an appointment for passport renewal, this is going to kill my average!
– Ok… Let’s take another appointment, then.
– No I cannot do that! Anyway I cannot process a national ID card since your forms are filled on paper.
– … and?
– They need to be printed on cardboard!
– How the hell would I know that? Is that stated anywhere on your web site?
– No, the cardboard forms can only be obtained here.
– er… great, so where do I get them?
– That would be with my colleague but she is not there today. Goodbye.

Not answering that, I went to see the front desk lady before she fell asleep and asked her for the cardboard forms. She immediately gave me one. Back to Lady number 8.

– Are those the right forms?
– (sigh) Yes…

I filled in the forms and handed them to her.

– You forgot to provide copies of your passport and national ID card!
– Nope, see: they are right there. The same ones you used for the passport renewal.
– Sorry, but the same copies cannot be used for both. Goodbye.
– Lady, you just scanned the copies and handed them back to me, why can’t you use them for another request?
– Because I need new copies. Goodbye.

Not answering that, I stood up and went to the copy machine next to her, flipped a few coins in, copied both passport and national ID card again, and went back to Lady number 8.

– There. Anything else missing?
– (mumbling)

Lady number 8 started reading my forms, then raised her head with a large smile and declared: “This is wrong! You see, you were born in France and you declared that…”. I interrupted her: “Lady, I was not born in France.”

She looked a bit puzzled, scanned my form again, then silently proceeded to stamp all the papers. After a while, she just stopped and said, victorious:

– You cannot renew your ID card! It will not expire until 2019!
– Then why is it written “Expires 01 Nov 2014” at the back?
– New law! New law! Since the 1st of Jan this year, all ID cards have been extended 5 more years. You do not need to renew. Goodbye.
– Wait… What happens when I want to cross a border and I can’t because my official papers are expired?
– You won’t have the issue because the cards are valid five more years.
– Do all the borders around Europe know about it?
– In the Schengen area, they all do.
– What about the UK and Switzerland?
– None of my business. They just need to do their jobs.

She rummaged through her computer and called up an official web page from the ministry of foreign affairs, effectively declaring that old national ID cards are now valid five more years. The same site sports a large paragraph saying: “However, it has come to our understanding that some nationals have run into trouble when trying to cross borders with papers that officially expired. If you are planning on travelling, it is recommended to use your passport instead.”

– See? See? You do not need to renew your ID card!
– So… I would need to use my passport then?
– Yes! Goodbye!
– Are we talking about the passport you just took away from me?

At that point she refused to answer and called her superior, a short, stout lady with a stern face who decided to tell me NO before she even knew what it was about. Lady number 8 explained: “This person wants to renew a national ID card and does not want to believe me when I say it is still valid 5 more years!” Stern lady paused for a moment, then looked at me and said:

“You do not need a national ID card.”
“Why?” I asked.
“You do not need to travel, Sir.”

This is when I knew we had exited our normal world to fall into some stern lady’s conception of reality.

Completely overpowered by her last argument, I took back all the papers left on Lady number 8’s desk, tossed them in my bag, and left the building. I can cope with some level of stupidity, but this went far beyond my reach. From now on I will have to make do without a national ID card.

Written by nicolas314

Monday 29 September 2014 at 3:01 pm

No news today

leave a comment »

Sarkozy was president of France for five years, from 2007 to 2012. During those five long years, the French media switched to full-Sarkozy mode, as if he were the only newsworthy person on this planet. Most newspapers would not hesitate to put his picture on the front page on a daily or weekly basis; news web sites were just ablaze with reports about what he did, what he had done, what he might have done, what people thought about what he had done or should do, together with editorials, discussions, and debates, all centered around one single person.

Listening to the radio soon became a chore. The journalists were so overwhelmed by this character that everything had to be brought back to Sarkozy sooner or later. At first it was fun: counting the number of times his name was mentioned on air yielded an average frequency of once every 10 seconds or so. It quickly got old, though. After a few weeks I decided to switch off the radio as soon as his name came up. Survival air time went from one minute to: switch on radio, hear “Sarkozy”, switch off radio. He literally neutralized journalism in the country for five years.

Around that time, I stopped listening to French radio and switched over completely to two other news sources: BBC Radio 4 and Bayern 2. Both have comprehensive programs about all kinds of topics, and news about the rest of the world rather than just one single person.

The revelation came to me in April 2013 after reading this article:

http://www.theguardian.com/media/2013/apr/12/news-is-bad-rolf-dobelli

I realized that most news reports are centered on topics I, at best, only vaguely care about. There is nothing I can do about a plane crash, economic fluctuations, or yet another G20 meeting. I am not trying to downplay the importance of these events, just stressing that there are a lot more pressing topics I need to know about, and they are not reported in the news.

About a year ago I stopped reading papers and listening to the radio. No TV at home for the past 20+ years, and I can finally declare myself free of those dreadful morning news reports. I get today’s weather by googling on a tablet and can now dedicate my time to reading sources I really care about.

Every now and then, I give it another try: switching on Radio 4 or Bayern 2 is still a joy; you just have to avoid the top of the hour to dodge the news reports. I gave up buying newspapers and removed all bookmarks to news web sites from my browsers. I still get some news from friends and colleagues and occasionally spend time on a news topic of interest, but overall the net result is quite good. No more stress-inducing news about dreadful things out of my control, no more Sarkozyfication, no more reports of the daily death toll.

Net result: fewer sources of negativity in my life, fewer reasons to stress. I have so far never encountered a situation where I would think: “Gee, I wish I had known about that before, I should have listened to the news more carefully.”

No news is good news.

 

Written by nicolas314

Tuesday 29 July 2014 at 1:23 pm

Posted in Uncategorized

Like an orange

leave a comment »

orange

A few years back, I used to have a mobile subscription with Orange France. At some point, a bright polytechnicien at Orange must have figured out that it would cost them a hell of a lot less to send electronic invoices rather than the old-style paper version you sometimes never receive, because the postal service in France can be kinda shitty. I started getting letters from Orange in the mail telling me how good this would be for the environment. We are talking about a full 10-page colour document covering in detail how much this helps save trees, the Amazon basin, kitties, etc., without a single hint of irony. They may even have included an authorization you had to fill in manually, sign, and return by post. Wasting paper to save paper. This made me chuckle, and then I forgot.

A couple of days later, I got the same 10-page letter, then another one, and then I started piling them up in my living-room, wondering how much more paper they would waste telling me how good it would be to help them save the planet. The irony was rather sharp: a mobile operator deciding to use hand-delivered paper documents to tell its customers about reducing paper usage. It must have occurred to another bright buddy at Orange that this was a bit inefficient, so they reduced their paper-sending rate to about once a month (it came with the paper invoice) and started bombarding me with SMS and phone calls from customer service: “Wouldn’t it be cool to save the planet? Aren’t you a bit concerned?” At this point I went into rebellion mode and decided to stick to dead-tree invoices, just to see how far they would be ready to go.

I seriously doubt that the Orange executives give a damn about the planet, but they surely know what cost reduction means. They could have been honest and said something like: “Switching to electronic invoices saves us N euros/year/customer, which we are ready to share with you 50-50.” Or: “the savings will be used for an environmental cause” (like: be honest, do it), or even: “thanks to these savings, we will maintain X jobs.” Anything but not the guilt scheme they tried to use against me. And then the story stopped when I left Orange for another telco.

A couple of years later, Orange was forced to drop their prices thanks to Free. They created a new brand, Sosh, in charge of selling the very same contracts at Free-like prices. From a marketing point of view they had to reduce the perceived services in order to justify the lower prices: your monthly bill goes down from 45 to 19 euros, but customer service is now only through a web forum, your customer web interface is reduced to a strict minimum, etc.

To me these were all great features! The standard Orange customer interface was a nightmare to navigate anyway, whereas the Sosh interface only shows information I am actually interested in. Getting customer service through a forum is a lot more helpful: plenty of non-Orange people reporting problems and solutions, helping you out often with a simple Google search. Cheaper, and better service. Yay!

Among the reductions in quality of service was a switch from electronic back to paper-based invoices. Wait… what?

The words on their web site were pretty clear: since you are enjoying a cheap subscription, you cannot expect too much; getting electronic bills is an advantage strictly reserved to pure Orange subscribers.

That made me chuckle, and then I forgot.
Until I started receiving paper invoices again. Shit. Filling up a drawer again.

It has been two years now, and today I received a fairly stern letter from them:

“It has been decided that in order to help us fulfill our environmental objectives, all of your invoices will henceforth be electronic.”

and then a couple of paragraphs down:

“Should you decide to remain stuck in the past, you can opt back for paper-based invoices by selecting the option in your customer web interface.”

Well, at least this time they are not making me fill in paper forms to authorize saving paper.
One, two, three turnarounds. You can’t expect such a big company to be consistent the way a person would. Poor guys.

Written by nicolas314

Tuesday 15 July 2014 at 7:46 pm

Posted in Uncategorized

Planes taking off

leave a comment »

Planes taking off

Planes taking off in Hannover. Nothing new, but I like this picture.
See also http://twistedsifter.com/2012/02/picture-of-the-day-striking-multiple-exposure-shot-of-takeoffs-at-hannover-aiport/

Written by nicolas314

Thursday 20 March 2014 at 1:14 am

Posted in Uncategorized

Music in 2014

leave a comment »

Got to meet old-time friends this Christmas and I was amazed to discover that many of them are still die-hard fans buying all of their music on audio CDs. Guys: we are now living in 2014 and you are still buying physical objects to listen to music? Say again?

I must have given up on audio CDs about 15 years ago, when mp3s started flowing around. It first started with dedicated web sites distributing sound files (aiff or au format first, then mp3). The thing snowballed very quickly, and then we had Napster and Kazaa to download all of our stuff through our glorious 33k US Robotics modems.

Truth be told, Internet sharing was not the biggest source. I owned about 300 audio CDs at that time, and most of my friends had between one hundred and one thousand music CDs at home. One day we started encoding all of them (using CDex) and circulated hard drives fully loaded with tons of music.  After a few months we had all gathered more music than we possibly could listen to in our awake moments for the next decades. Internet came as an extra source for very recent albums or stuff you could not find in brick-and-mortar stores: bootlegs, one-of-a-kind albums, and little-known artists.

I purchased some of the first portable MP3 players in 1998 and hooked one to my stereo. That was probably the last year I actually inserted a physical disc into a CD reader.

Is that deviant? I do not think so. Let me take an example based on reasonable assumptions:

– Apple’s iPod Classic offers 160GB of storage
– A song is 3-min long on average
– Albums contain 10 songs on average
– Songs encoded in 128k take up 1MB/min on average

Hence, an iPod Classic contains about 5,000 albums.
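
(The arithmetic: a 3-minute song at 1MB/min weighs 3MB, ten of those make a 30MB album, and 160,000MB / 30MB ≈ 5,300 albums, rounded down to a comfortable 5,000.)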

Assuming albums are priced at 10 euros on average, this represents 50,000 euros’ worth of music on a device currently priced around 200 euros.  I cannot imagine somebody storing 5,000 CDs at home and encoding them one by one, or somebody willing to spend 50k on a music collection. Seriously: has anyone ever filled up an iPod with only legally-acquired music? How many iPod Classic users have actually spent that much time or money on their content? If there ever was a business model based on the assumption that people would pay for the content they listen to, it is obviously unaware of these very basic facts.

Music is not a luxury or a commodity. It is part of our human culture, and I would go as far as saying it is part of our daily needs. You can survive without music, like you could survive without speaking or bathing, but it is not going to be fun. The only government I ever heard of that decided to prohibit music was the Taliban, between the Russian and the American occupations, and that did not end too well.

You can certainly control, tax, and rule the distribution of physical objects like audio CDs and stereos, but you cannot possibly have any effect on people singing in their showers, friends having a gig, or people who just want to dance to something else but deep silence. Music is a form of language, it is meant to be expressed and shared in order to be alive.

CDs are a convenient way to distribute and share music among humans, not the only possible one. We now all have high-bandwidth Internet connections from home and mobile devices, but we can also share huge collections of music face-to-face by just carrying a 500GB hard drive around. Since music wants to be shared, anything that goes into this direction is naturally favoured. You cannot prevent humans from sharing a form of language they take pleasure in hearing, no more than you can prevent them from telling stories or showing pictures of beautiful places they visited.

My kids have never bought a single CD or even inserted one in a CD player. When they want to listen to music they turn on their iPods and choose an album.  Each of their mini iPods contains between twenty and a hundred times more music than was available to me as a teenager in the 80s. They can try everything, build up their tastes, dance, sing, and experience the whole world of music for the price of a single device. Compare that to the tapes and vinyls we carried around thirty years ago: we were stuck into the same few artists and rarely experienced new stuff. If we did, it was through low-quality pirated tapes and few of us could afford spending money to purchase everything. We just shared.

Having literally thousands of albums on a hard drive is not a solution though. If you want to be able to play them on any MP3 player, you often need to transcode songs, sort them out, find the album covers, and re-tag all the songs correctly if you do not want to end up with a million songs labeled “Unknown Song” by “Unknown Artist” in “Unknown Album”, unless you have an iPod Shuffle and enjoy it, of course.

For about 5 years now, Spotify and Deezer have been changing the rules once more: instead of curating your own MP3 collection you can rely on other people doing it for you. They take the time to sort things out, put up the right covers, search for the lyrics, find links to the band’s Wikipedia page, etc.  The really exciting part is that this kind of service holds a million times more than you could possibly store at home, and they keep adding new artists every day. If you want to discover new talents, there is no way you could achieve that with your personal MP3 collection. Disclaimer: I have no part in those services, I am not even subscribed.

You have to admit that kind of thing goes in the right direction for the environment. When I hear that an artist has sold 2 million copies of an album, I cannot help but think of how many tons of plastic went into making discs, just to distribute the same data to a large audience.  Instead of all of us having terabytes of MP3s at home, isn’t it more sensible to store everything in a single shared pool and make it easy for everybody to access that pool remotely? This is exactly what Amazon and Google Music are doing.

We cannot remain blind to the main issue though: how do we fund artists?  It does not take much insight to see that a business based on the scarcity of physical goods has no chance against electronic goods that cost nothing to store and copy.

A famous post written by Courtney Love summarized the situation in the early 2000s: Courtney Love does the math
tl;dr: Out of the millions generated by her gigs, she and her band only succeed in making a modest revenue. The rest is eaten up by majors.

Tough times for CD vendors, but the fact that the current model of selling CDs is dead does not mean there is no other choice. Looking at recent stats, it seems more and more artists are getting most of their revenues from live performances and various merchandising items sold on the spot: T-shirts, mugs, and the inevitable band posters.

See this post from Nov 2013 about shifting artist revenues over the past 15 years:  http://www.digitalmusicnews.com/permalink/2013/11/20/shiftingsources

I have absolutely no trouble with this model. Again: music wants to be expressed and shared! Live performances are the perfect incarnation of this fact.

Business is going to be tough for people who produce CD-only pieces, things you cannot easily share and enjoy in a live performance. It does not mean they have to cease their activities though. Other models based on free contributions have also been quite successful in many cases.

Music can be distributed under permissive file-sharing licenses (e.g.  Creative Commons); see sites like http://www.jamendo.com/. Other artists have decided to offer their songs for free download from their own web sites (e.g. Radiohead) and invite their fans to contribute whatever they want, aka the beggar model, or as Courtney Love put it: “I am a waiter”. Others ask for funds through Kickstarter equivalents for music. You name it. Compare that to the emergence of radio broadcasts: music was suddenly free and available to all without limits, and yet it survived and generated a huge music recording industry. Some variables have changed but the issue remains the same at heart: let us enjoy your music and we will find a way to fund your next album. You will probably not become as rich as Madonna or Michael Jackson in the 80s, but there should be enough for you to survive.

Another major shift happened recently: the cost of recording an album is now so low that just about anybody can do it at home on a consumer-class computer and get fairly high production quality. This reduces the role of music majors even further. It used to cost a fortune to record a song, which is why you needed investors to create an album. Not anymore. The cost of producing an album and distributing it through the Internet is so low that you do not need to involve bankers and contracts. Just do it over the weekend with the same computer you use to play Starcraft 2 and you are done. What do we still need those record companies for, then?

Lowering the barrier to entry has had consequences. As Moby put it: there is a lot more mediocrity on the market, and real talents are drowned in a flow of bad music. I take the point, but removing top-level executives from the decision chain can only increase the diversity of what we hear, and that is a good thing. Between 1960 and 2000, everything you heard was carefully selected by a small bunch of old white men who made all the decisions about who had a right to be popular. Removing this bias opens the gates to mediocrity, but also to many more talents that would otherwise have remained silent.

If you are interested in the topic, you may enjoy the documentary PressPausePlay, from 2012. It reviews many of the points above with a lot more data.

Written by nicolas314

Monday 13 January 2014 at 12:53 am

Posted in music, network storage


Insanely Large Machines

leave a comment »

Just saw ESO pointing to this xkcd today: xkcd Telescope names.

The Very Large Telescope is a project I was lucky enough to help bring from its infancy to daily operations. Among the crowd of engineers who worked on that project, Daniel Enard was considered by most as the spiritual (and technical) father of the VLT. I remember him mentioning once at lunch time that he picked the name Very Large Telescope as a temporary placeholder, and it finally stuck for lack of a better one. Engineers may be creative but not so good at marketing, it seems. We heard a bazillion jokes about astronomers having something to compensate for, and Very Large finally ceased being ridiculous as we heard it again and again. Everybody worked on the VLT and that was it.

When the next project had to be named, the term Extremely Large Telescope was naturally coined in reference to being bigger than very large. After all the jokes we had about Very Large it only seemed fitting that we would start again on Extremely Large. Half a bazillion jokes later, everybody talked about ELT and we all forgot about the biggerness, again.

When the next project started emerging from a few busy brains with too much time on their hands, it was obvious they had been looking to extend the pattern towards biggerness. The name Overwhelmingly Large was probably coined one afternoon in a Biergarten after one too many beers by a team of French and Italian guys, so there had to be something gross and funny about it. When they announced it for the first time in the ESO auditorium there was an outburst of laughter and people roaring in their seats. This time the name stuck. Not for lack of finding a better one, but because the pattern had become a real tribute to largeness. Prepare more jokes.

I thought OWL was pretty cool and the project quickly got a mascot. The project was never cancelled because it never really started: it was not achievable, and it would have bankrupted a few European countries for far too risky an enterprise. We kept the jokes though.

Fun to see it revived in an xkcd comic after all these years.


Written by nicolas314

Saturday 23 November 2013 at 1:10 am

Posted in astronomy


One-time file-sharing

leave a comment »

one

Say you rent a box somewhere on the Internet. You installed Debian stable on it because you want it to be nice and stable, and you run a few daemons that are useful to have online. It could hold your vast music collection or family pictures, or serve as remote storage for backups. Imagine you wanted to share some of the files hosted on this box with your relatives, who may or may not be computer-literate. Most of them know how to use webmail, but asking them to install an FTP client is just beyond reach. Obviously, you do not want to give these guys too many rights over your box (like ssh access for scp). What are the solutions?

Setting up a dedicated HTTP server

Simple enough: set up an HTTP server to distribute static files. lighttpd can be set up in a couple of minutes and is very efficient for static content. But you do not want to distribute your files to the whole Internet: sooner or later a web spider will crawl in and index your family pictures and all sorts of things you never meant to be public. Next step: configure password protection on the server.

Fair enough. Now you have limited file downloads to people who know the password (provided they know how to enter one). Do you create multiple accounts, one per peer? That would be preferable, otherwise you will never know who downloaded what. But then you have to communicate the passwords to your peers and have a procedure for when they forget. You know you are headed straight for massive butt pain.

Second issue: passwords can be shared. You share that 2GB movie with a couple of friends, and a couple of weeks later you find out there are currently 1,549 active downloads of the file. Sharing is human nature and that is completely Ok, but you probably did not sign up to become a content distributor for the whole Internet, only for a couple of friends and relatives.

Next step: use one-time authentication

There are better solutions out there: since you only mean to share one single file (or set of files) each time, you do not need to create accounts for your friends. You give them a one-time download token and forget about it.

A one-time download token is a URL. It looks like the kind of URLs you get from URL shorteners with the funny string at the end. Something like http://shortener/12398741

One-time tokens can be shared, but since they can only be used once, whoever shares one loses it. The token is randomly generated, so robots cannot simply scan all possible URLs in a row until they find a valid one, and it is invalidated immediately after use.

There are many ways to achieve this on regular HTTP servers. Apache probably has a million configuration options for user authentication, including one-time passwords or something similar, but I have to admit I did not even try. I already wasted enough of my life in Apache config files. lighttpd can be configured to do that but the only solution I found required some Lua scripting and I did not feel up to the task.

Next-step: Do It Yourself

After reviewing countless pages of configuration options for various HTTP servers, I decided it would be quicker to implement this in a tiny web app than to try to understand complex configuration options. My first iteration used a Python FCGI script written with web.py, attached to a lighttpd process. Handing static files from the Python web app back to the embedding lighttpd process is reasonably simple.

This implementation suffered from a number of pitfalls though. For one thing, performance was bad: for some reason the Python process would eat insane amounts of CPU and RAM when sending big files, slowing the server to a crawl. The second showstopper was the complexity involved for such a simple setup. I had to write a Python script to generate the lighttpd configuration file with a number of deployment options: where to put config files, log files, static files, the port number, etc. And then came the inevitable dependency issues: Python version versus web.py version versus lighttpd version. Some combinations worked fine, some did not. Nothing specific to Python or lighttpd, but the more gears you have, the more places there are for grains of sand to get in.

I survived with this setup for a year or so, until Go came along. I have already reviewed the language in the past and will not come back to that, but suffice it to say that developing HTTP servers in Go is the most natural thing. Adding the one-time token ingredient to the soup took just one evening.
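
The real code is on GitHub (link further down); what follows is only a minimal sketch of the idea, with names and file paths invented for the example: a map from random tokens to file paths, and a handler that invalidates the token before serving the file.

package main

import (
    "crypto/rand"
    "encoding/hex"
    "fmt"
    "log"
    "net/http"
    "sync"
)

var (
    mu     sync.Mutex
    tokens = map[string]string{} // token -> path of the file it unlocks
)

// newToken registers a file under a fresh random token and returns
// the token to append to the server URL.
func newToken(path string) string {
    b := make([]byte, 16)
    if _, err := rand.Read(b); err != nil {
        log.Fatal(err)
    }
    t := hex.EncodeToString(b)
    mu.Lock()
    tokens[t] = path
    mu.Unlock()
    return t
}

// serve hands out the file matching the token in the URL, deleting
// the token first so the link only ever works once.
func serve(w http.ResponseWriter, r *http.Request) {
    t := r.URL.Path[1:] // strip the leading "/"
    mu.Lock()
    path, ok := tokens[t]
    if ok {
        delete(tokens, t)
    }
    mu.Unlock()
    if !ok {
        http.NotFound(w, r)
        return
    }
    http.ServeFile(w, r, path)
}

func main() {
    fmt.Println("share this: http://localhost:8080/" + newToken("/tmp/holidays.tar"))
    log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(serve)))
}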

Once rewritten in Go, the end result was about as big as the Python implementation, excluding the script that created the lighttpd config. The main difference is of course that I no longer have to maintain cross-references between package versions for Python, lighttpd, and web.py: there is only one dependency to cover, Go itself.

It was straightforward to enhance the program to support more options, respond to favicon requests, and keep a JSON-readable database of active tokens. Performance is astounding: the serving Go process never takes more than a few megs of RAM (about the size of the executable itself) and only uses tiny amounts of CPU, since the work is mostly I/O-bound anyway.

There is one thing I should have foreseen and had to re-implement. I send the one-time links by email, and more and more people read their email on a smartphone or tablet. Many just clicked the link without thinking twice, triggering a 2-4GB download and killing their phone and their data plan at the same time. Wrong move.

The next version features a two-time download page: the first link sends users to a page summarizing the download, offering a second link to actually start the real thing with a big warning about the size of what will actually be sent.

There are many other features I would like to add to the current version, and I am hoping other people have better ideas for new ones, which is why I shared it on GitHub. Find it here:

https://github.com/nicolas314/onetime

Since we are talking about sharing private data between friends and relatives, protecting the download is a good idea. A recently added feature is support for HTTPS: you only need to point your config to server certificate and key files and off you go. The HTTP/HTTPS switch is handled entirely by Go.
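
For the record, the TLS switch really is a one-liner; a minimal sketch, with cert.pem and key.pem standing in for whatever files your config points to:

package main

import (
    "log"
    "net/http"
)

func main() {
    // serve the default mux over TLS; cert.pem and key.pem are
    // placeholders for the certificate and key files in your config
    log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}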

The resulting program is far from top quality but it fulfils the need. Go give it a try if you want to. Careful though: it only works on Linux boxes for now.

Written by nicolas314

Wednesday 24 July 2013 at 9:18 pm

Posted in fun, go, programming, webapp


Starbugs

leave a comment »

The sky was clear: we would not be playing Bomberman in the VLT control room that night. Clear skies and sub-arcsecond seeing meant we would have a full batch of data to process every hour or so until the next morning. Once the calibrations had finished, the telescope operator launched the first observation. I re-compiled the whole processing software once more, just to be sure we had not forgotten anything, ran a series of unit tests for good measure, and waited in front of my screen for the first incoming set of frames to appear on the local disk.

The first batch of sixty frames was completed after exactly sixty minutes. As the machine started its number-crunching, everybody in the room turned to me, waiting for the first processed image to come out. It took a good fifteen minutes for all algorithms to run through the set: calibrate all frames, remove the infrared sky, take into account bad and crazy pixels hit by cosmic rays during the observation, register all frames to a common position, and finally stack them into a single image. The final result appeared on the screen above me and I could see smiles all around. It seemed the results were up to what my customers were expecting.

Now we had a clear image of a set of bright objects against a dark background. To assess how much infrared light each object emits, the image needs to be calibrated. Somewhere on the image is a standard: a star with precisely known photometry at the wavelengths we were observing. Compute how many photons were received in this image from this star and you can deduce the magnitudes of all other objects present on the same frame.
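
The arithmetic behind this step is the classic Pogson relation, presumably what the calibration routine applied; a minimal sketch in Go, with made-up numbers:

package main

import (
    "fmt"
    "math"
)

// magnitude derives an object's magnitude from its measured counts,
// given a standard star of known magnitude measured on the same
// frame, via the Pogson relation: m = mStd - 2.5*log10(counts/countsStd)
func magnitude(counts, countsStd, mStd float64) float64 {
    return mStd - 2.5*math.Log10(counts/countsStd)
}

func main() {
    // an object collecting a quarter of the standard's photons comes
    // out about 1.5 magnitudes fainter
    fmt.Printf("%.3f\n", magnitude(2500, 10000, 10.5)) // 12.005
}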

I checked the final frame position on the sky once more and then launched the photometry calibration routine. The standard star was found and identified by name, its photometry computed by integrating all received light within a small surrounding radius, and all objects in the frame were suddenly known by magnitude rather than photon count. Perfect score! With a sigh of relief, I finally pushed myself away from the desk and reached for some water. The memory routines had done their job; we did not crash in flight for lack of RAM this time. Eleven more hours to go and then we could all go to sleep.

Next incoming data batch was processed just fine. Another image emerged. And then another one. It seemed everything was working perfectly fine.

Around midnight, something weird happened: the result image was correctly processed but photometry calibration failed because it found no standard star in the frame.

– What? Emilio, did you include a standard in the last observation?
– Let me check… Yes I did. You should have it somewhere around the top-right corner.

The standard star was indeed there, so why did the photometry calibration routine fail to find it?
I immediately opened the database we had for infrared standards and searched frantically for the star, finding it right away. I reached for the debugger and re-ran the whole routine with breakpoints. Confirmed: the search for standard stars in this region returned nothing, and yet the database was correctly loaded and fully in memory. The debugger showed what looked like correct values for star positions, but the search function still failed.

Our star database was a simple text file with named columns: first the star name, then its position on the sky as Right Ascension and Declination (a couple of angles), then its magnitude at various wavelengths. Something like:

# Name | Ra         |  Dec      | Sp |  J     |    H   |   K    
AS01-0 | 00 55 09.9 |  00 43 13 | -- | 10.716 | 10.507 | 10.470 
AS03-0 | 01 04 21.6 |  04 13 39 | -- | 12.606 | 12.729 | 12.827 
AS04-1 | 01 54 43.4 |  00 43 59 | -- | 12.371 | 12.033 | 11.962 
AS05-0 | 02 30 16.4 |  05 15 52 | -- | 13.232 | 13.314 | 13.381 
AS05-1 | 02 30 18.6 |  05 16 42 | -- | 14.350 | 13.663 | 13.507 
AS07-0 | 02 57 21.2 |  00 18 39 | -- | 11.105 | 10.977 | 10.946 
AS10-0 | 04 52 58.9 | -00 14 41 | -- | 11.349 | 11.281 | 11.259 
AS13-1 | 05 57 10.4 |  00 01 38 | -- | 12.201 | 11.781 | 11.648 
AS13-1 | 05 57 09.5 |  00 01 50 | -- | 12.521 | 12.101 | 11.970 
AS13-3 | 05 57 08.0 |  00 00 07 | -- | 13.345 | 12.964 | 12.812 
AS15-0 | 06 40 34.3 |  09 19 13 | -- | 10.874 | 10.669 | 12.628 
AS15-1 | 06 40 36.2 |  09 18 60 | -- | 12.656 | 11.980 | 11.792 
AS15-2 | 06 40 37.9 |  09 18 41 | -- | 13.711 | 12.927 | 12.719 
AS15-3 | 06 40 37.9 |  09 18 19 | -- | 14.320 | 13.667 | 13.415 
AS16-0 | 07 24 15.3 | -00 32 50 | -- | 14.159 | 14.111 | 13.305 
AS16-1 | 07 24 14.3 | -00 33 05 | -- | 13.761 | 13.638 | 13.606 
AS16-2 | 07 24 15.4 | -00 32 49 | -- | 11.411 | 11.428 | 11.445 
AS16-3 | 07 24 17.2 | -00 32 27 | -- | 13.891 | 13.855 | 13.818 
AS16-4 | 07 24 17.5 | -00 33 07 | -- | 11.402 | 11.106 | 11.043

J, H, and K are infrared bands, each corresponding to a relatively narrow wavelength range.

Something went wrong in the star-loading routine, so I loaded the whole set into memory once more and dumped it back to a text file to plot it. The results were not particularly obvious:

catalog

Somebody in the room came up to the screen and asked what we were looking at. I said: “these are the positions of all known infrared standards we have. For some reason we cannot find tonight's star in here.”

Then I found our star. It was not in the right position: it should have been below the x axis but had shifted symmetrically above it. In the data set, the Declination was indeed negative, something like -00 14 41, but it was plotted on the wrong side of the x axis.

And then it dawned on me: the star was plotted at +00 14 41 instead of -00 14 41.

How do you read numeric data in C? Using scanf(). When you scanf() for “-00”, what do you think ends up in memory? Zero. Positive zero, since it is technically the same as negative zero. Except the angle has now been flipped around the x axis.

Right: plotting a denser set of stars revealed a clear white patch for Declinations between zero and minus one degree. I had simply forgotten to treat the first character as a sign, since scanf() makes no difference between “00” and “-00”. Once I corrected the database-loading line, everything fell into place and the photometry computations ran as expected.
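
The original routine was C, but the trap and its fix are easy to sketch in Go, where strconv.Atoi("-00") drops the minus sign exactly like scanf() does; the cure is to read the sign off the raw text before parsing the degree field:

package main

import (
    "fmt"
    "math"
    "strconv"
    "strings"
)

// parseDec converts a declination like "-00 14 41" (degrees,
// minutes, seconds) into decimal degrees. The sign is taken from
// the raw text because parsing "-00" as an integer yields plain 0
// and silently loses the minus.
func parseDec(s string) (float64, error) {
    f := strings.Fields(s)
    if len(f) != 3 {
        return 0, fmt.Errorf("expected \"dd mm ss\", got %q", s)
    }
    sign := 1.0
    if strings.HasPrefix(f[0], "-") {
        sign = -1.0
    }
    var v [3]float64
    for i, field := range f {
        n, err := strconv.Atoi(field)
        if err != nil {
            return 0, err
        }
        v[i] = math.Abs(float64(n))
    }
    return sign * (v[0] + v[1]/60 + v[2]/3600), nil
}

func main() {
    dec, _ := parseDec("-00 14 41")
    fmt.Printf("%+.5f\n", dec) // -0.24472, not +0.24472
}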

Interestingly enough, the same bug seems to have hit a large number of GPS devices over the years. The German c't magazine told the story a few years back of somebody who planned a bike tour around Bordeaux and ended up with intermediate waypoints in the middle of the ocean. Bordeaux sits around longitude zero (Greenwich), so you do get data points at angles starting with -00. In effect, all points were correctly plotted on the map except the ones between zero and minus one degree, which flipped over to the other side of the meridian. As soon as I saw the map I knew exactly what had happened.

At least the guy was clever enough not to bike into high waters. It could have been worse: though probably related to time manipulation errors rather than angles, you may want to read how F22 Raptors spontaneously rebooted upon crossing the international date line:

F22 Raptor gets zapped by international date line

There are some assumptions you should not make about handling time in software. Some of them are presented in this blog article:

Falsehoods programmers believe about time

Time and angles can be tricky scalars.

Written by nicolas314

Wednesday 26 June 2013 at 12:39 am

Teaching in 2013

leave a comment »

homework

My 11-year-old son came back home a few weeks ago with a homework assignment about the Internet for his technology course. This being a French school of no particular distinction, I did not expect much in terms of teaching. I was still surprised that most questions were open-ended and could lead to interesting discussions. Which they did: he turned out to be a lot more interested in the topic than I thought. We spent a couple of hours discussing the Internet, freedom of speech, copyright, and digital property.

I read the first question: “Can you say anything you want on the Internet?”

My son’s reaction was: “No! You told me we must not use hate speech or insults on social sites and such.”

“Ok, but we still have freedom of speech in Europe, remember? This is actually part of the declaration of human rights. Why should it be different on the Internet?”

He thought for a while and said: “but still, if I say horrible things about someone on the Net I will get in trouble, no?”

“Yes you will, and there are laws against hate speech. It does not mean you cannot speak your mind, because freedom of speech is a foundation of our modern society, but you will have to face the consequences.”

“So what should I put as an answer?”, he asked.

“Let us word it this way: in countries that guarantee freedom of speech, you are free to say whatever you want on the Internet. I believe this is the only technically valid answer. By the way this does not only apply to the Internet.”

He read the next question: “I want to download my favourite artist’s album. Do I risk anything?”

The expected answer to this oriented question is probably something like: “Oh no! Internet is bad for music artists!”, so I tried to work around it. I told him: “Some musicians dedicate their lives to their music. These guys expect you to pay them for it through the sale of music-related items. In the past it used to be all about buying discs on vinyl, or tapes. Today they make a lot more money on concerts, T-shirts, and all kinds of merchandising shit. Some musicians do not even expect you to pay for recorded music, even if they officially sell it. And then you have artists distributing their music under a license that allows you to share it as much as you want without fee. These guys have understood that if they want their music to be heard they should let their fans do their communication for them by distributing songs as widely as possible. It brings more people to live concerts in the end, so more money to them.”

“Right. So how do you make a complete answer?”, he asked.

“Say this: for artists who expect to sell recorded music, you may get into trouble for downloading an album without payment. If the artists share their music freely (e.g. jamendo.com), there is no issue.”

He read his assignment further: “Ok, next question is: I like to burn my own song compilation on CDs, is this legal?”. He thought for a while and said: “I don’t get it. What does it mean to burn a CD with songs?”

Ah yes… We are talking about somebody who was born after 2000, never owned a CD player or an audio CD, has probably never seen his father use music CDs, but has his own iPod with a bazillion tracks, more tracks than I ever had tapes as a teenager. It took me some time, but I found a bunch of blank CDs hidden under a pile of dust and explained that you could store 60 minutes of music on such a plastic slice and that his fourth-generation iPod held about 250 of them. He seemed a bit surprised, but not that much.

“So people used to carry around CDs for music with just, what… 10 songs on it?” he asked.

“Pretty much. And we had tapes before that.”

“Oh… So what should I say there?”

I really wanted to write something like: “What is a CD?” but we are not far enough from 2000 yet. Anyway, I looked it up, and making your own compilations is (at least in France) considered fair use. My answer was: “Yes, if it is only for my personal use.”

Next question: “My friend sells CDs with music he downloaded. Is this legal?”

Ok, I’ll play. “If the music he downloaded is distributed with an appropriate license, this is perfectly legal. This is often not the case for popular artists though.”

The next discussion took us through Creative Commons and what sharing is about. You remain the copyright holder of what you produce without having to declare it to any authority, and you get to choose the license under which you distribute your creations. A lot of people share their creations under a liberal license that even allows others to re-sell them. Popular artists handled by music majors are still stuck in the past, with exclusive licenses and distribution through physical media only.

A couple more questions about the Internet in general and we were done with it. The teacher wanted to nail the point and asked to be sent the assignments by email… in Word format. So be it.

A couple of weeks later, my son came back home furious: he got the worst grade in his class. Not only that, but he had started a lively discussion with his teacher that ended with both accusing each other of outright lying.

My son asked him: “You said I was wrong when I said anybody can say what they want on the Internet. Does this mean freedom of speech does not apply to the Internet?”

The teacher replied: “Ok, you can say whatever you want. If you really want to spend the rest of your life in jail, go for it.”

“So you do admit there is freedom of speech, right? Why did you correct me as wrong in my paper?”

The teacher was caught unprepared on that one and refused to take it further. It was not the answer he was expecting, and he did not want to spend any time elaborating.

My son insisted: “Second question: I do not see how I could be more precise. Tell me what is missing?”

His teacher was apparently not aware of Creative Commons or equivalent licenses. My son tried to summarize our discussion about creations and licensing but he was quickly interrupted. “Bullshit! Downloading music is just against the law!”

From that point on, my son just gave up.

The past fifteen years of parenting have taught me at least one thing: you do not mess with kids' sense of injustice. My son was simply outraged that his teacher was so far behind that he would not even discuss these points with 11-year-olds who had obviously spent time thinking things through and brought back topics for discussion rather than ready-made answers.

“You know what?”, I told him, “your teacher has obviously prepared these questions 10 years ago and did not realize the world has been moving since then. You probably know more than he does on the topic right now and for that I am incredibly proud of you. Forget about the grades, nobody gives a shit. Be proud too: you know more than a 40-something who is supposed to teach the damn thing.”

I did not even try to meet the teacher. His role is not to teach the Internet to 11-year-olds who anyway all know more about it than he does. His role is to give homework and put grades according to pre-written answers that are older than the students themselves.

Maybe that works when you teach Latin, but the Internet moves quite a bit faster than a dead language.

I am still toying with the idea of sending him the bill for a Microsoft Word license, though to be fair: we used Export as docx from Google Drive.

Written by nicolas314

Friday 21 June 2013 at 10:59 pm

Sorting Certificates

leave a comment »

As I went out for a smoke the other day, I found two colleagues trying to solve a puzzle they had to code. The game is the following: you get a list of certificates belonging to Certification Authorities. A certificate is a list of key/value pairs encoded in a canonical binary format (called ASN.1) and then signed with a cryptographic key. Among the key/value pairs are:

  • A name for the identity corresponding to this certificate, or DN for Distinguished Name
  • A name for the entity that delivered (signed) the certificate: Issuer name
  • A serial number that is unique for this Issuer+Certificate
  • Validity dates: valid from and valid until
  • … and a bunch of other fields that are irrelevant for this issue

Certificates are always delivered by a Certification Authority (CA), except those of Root CAs, which are self-signed (or self-issued): Issuer and Subject carry the same name. The way Certification Authorities work, you normally start by creating a Root CA, then issue certificates for subordinate CAs (subCAs) that are themselves in charge of creating their own CAs, or just of issuing certificates to end-users, machines, or applications. In its simplest tree-like form, a CA hierarchy may look like this:

CA hierarchy

Now you receive a list of unsorted certificates and you are asked to sort them so that every CA certificate has its issuing CA somewhere on its left. If there are multiple roots, they may appear anywhere in the list as long as they are left of their daughter CAs. How do you sort them?

A very straightforward approach would be to rebuild the CA tree. Find the Root CAs: they are easy to identify since their issuer is themselves. Then parse all remaining certificates and find the immediate daughters of the Root CAs you already have. Parse again and re-attach everything in a tree-like structure, sorting siblings together. Once you have a sorted tree, iterate over all Root CAs, then subCAs, etc., until you reach a terminal node, i.e. a CA that has not issued any CA certificates itself.

Fancy, but that requires some tree-like structures in memory that may be tricky to get right on the first attempt. I also did not like the fact that emitting CAs in a list would probably have to use recursion to remain elegant. I have very bad memories of recursive algorithms in production, I have seen stacks vaporize in flight more than once. Sure, they can be translated to iterative methods but then forget about elegance.

My colleagues were looking into fancier ways of achieving the same result, designing some kind of clever sorting algorithm with a bit of memory to end up with a sorted list in a limited number of passes. When I joined them they had just found a sort in O(N^3). I tried to understand their method but just could not figure it out.

I thought about it for a moment and got one of those a-ha! insights:

“Guys, have you tried sorting the input list by validity date? Since a daughter CA is always younger than its parent, just sort on the valid from field.”

Problem solved.
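
With Go's standard crypto/x509 types the whole trick fits in a few lines; this is a sketch of the insight, not the code my colleagues wrote:

package main

import (
    "crypto/x509"
    "sort"
)

// sortBundle orders CA certificates so that every issuer appears
// left of the certificates it signed: a daughter CA cannot be older
// than its parent, so sorting on the "valid from" date (NotBefore)
// is enough.
func sortBundle(certs []*x509.Certificate) {
    sort.Slice(certs, func(i, j int) bool {
        return certs[i].NotBefore.Before(certs[j].NotBefore)
    })
}

func main() {
    var bundle []*x509.Certificate // fill from your PEM files
    sortBundle(bundle)
}

This assumes, as the insight does, that no CA signs a certificate before its own validity period starts.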

Written by nicolas314

Thursday 20 June 2013 at 11:31 pm

Sold my Soul

leave a comment »

now

My soul is now officially sold to Google since I signed up for Google Now on my Nexus 4. The terms and conditions initially scared me to death. Long story short: you sell your soul and give up the last shreds of privacy you might have had. I can only hope this data trail will never be used against me for nefarious purposes.

So how does it work and what do you gain in exchange for your soul? The price to pay is to leave your GPS constantly switched on. Your phone also constantly listens for nearby Wi-Fi access points, even when you are not connected or trying to attach to one. This eats up your battery even faster than usual, but since I could not spend a complete day without charging at least once anyway, this does not change much. What you gain is instant positioning no matter where you are. If you feel lost in a city (happens to me quite a lot), just switch on Google Maps and get an immediate fix. Coupled with contextual search, it means you can whip out your phone, whisper “bakery”, and get directions to the nearest one in less than a second. Nice.

What makes Google Now even nicer is the long list of heuristics they have attached to these data. With just a one-day data set, you can tell where I live and where I work since I repeatedly spend night-time without moving and day-time at work, moving a bit. You could also tell which are my favourite restaurants at work and how often I visit them. You can tell where I shop during the weekends, or how often I go get my kids at school. You could also track customers and partners I have business with, and know how often I go through interviews with headhunters to find another job. But I digress.

The Google guys have attached events to your presence in various locations and take advantage of this to offer you some advice. Let me give two examples:

A friend of mine has a guitar lesson on Wednesdays at 7pm. He usually takes a train to work, but the guitar lesson is a bit off-center so he takes his car. The second Wednesday after switching on Google Now, he got a message around 6:30pm warning him that, given current traffic conditions, he should leave now to be on time for his 7pm appointment.

I was on a trip to San Francisco last month. Two hours before my scheduled departure time, my phone rang an alarm telling me I should go now to be on time, together with traffic conditions and directions to the airport. Even better: on my first day there I slept in a hotel and went to work the next morning around 9am. The next day, I got an alarm from Google Now around 8:30am telling me that if I wanted to go to the same address as yesterday, I should leave now because of traffic on I-110. I was a bit dazed and looked at my phone with a large WTF across my face.

From your speed, Google Now also knows whether you are walking, cycling, in a bus, in a train, on a plane, or in a car. At the end of each month you get a summary of how much you walked and cycled, which is a nice touch when you are trying to lose weight. The next step would be to connect it to the device I strap to my chest when running, so I would know exactly how many calories I lose per session.

Google Now is also connected to various city transportation services. When you get close to a station, it automatically displays the timetables for the next buses or trains. It does not work with tramways in Paris, but I am told subways and buses should be Ok.

When traveling abroad you get a card showing the time at home, another providing exchange rates, and yet another offering translations to the local language. As I was in London last week, the whole interface switched to a London theme, complete with Big Ben and the London Eye. That was a fun touch!

This is incredibly useful but also totally scary. It means my private data are stored somewhere in Google's data centers. What protects me for now is that millions of people are tracked the same way and I have no reason to stand out. What scares me is how these data could one day be used against me, for whatever reason. Imagine a European dictatorship deciding that anybody who worked in the Bay Area is a potential terrorist, or simply a competitor who would like to know which companies I visited there. Collecting data is harmless; the danger comes from who uses it and for what purpose, and I have absolutely no control over who accesses my data and what they want from it.

Once you get past the privacy concerns, you do enjoy these location-based services. They may be frivolous for now, but I cannot help thinking of the recent time when I did not have a smartphone at all. Who can tell what they will bring us next?

Written by nicolas314

Sunday 12 May 2013 at 1:10 am

Posted in android, google, mobile


Minecraft kids

with 3 comments

If you have not yet heard about Minecraft, you owe it to yourself to just give it a try. The game can be played for free in many ways, though you can always opt to contribute 20 euros to Mojang to thank them for their efforts and creativity. Being a paying customer also gives you access to multi-player games on public servers, though that could also be achieved for free with a bit of tinkering. But hey: 20 euros for literally hundreds of hours of gaming is really nothing. Compare that to the price of a small Lego Star Wars box for instance. Just try the free version and see for yourself.

Simply put: Minecraft is just as fun as Lego in real life, except it all happens on computers and you do not risk losing a toe stepping on pieces in the dark, teaching your kids a whole lot of new swearwords in the process. Minecraft takes place in an unlimited virtual world composed of big blocks that stick to each other by magic. There is nothing else to it, really. Blocks have textures; they can be transformed, attached, dug, destroyed, but what you mostly do is shove them around. The main player interface is a first-person view with an inventory, and that's it.

The initial story was a bit more elaborate: you were left on your own on a desert island and had to build a shelter before night, when zombies attack. You could combine blocks of different kinds to build objects or other materials, e.g. burn sand to produce glass, or combine wooden sticks to create a pickaxe.

About a year ago, the game added a new dimension by introducing creative mode. You can now fly around the world and have unlimited access to all the building blocks you want. Forget about zombies and monsters; if you want to fight bad guys there are much better games than Minecraft out there. Creative mode is all about making stuff. Start from scratch, have ideas, make something ugly, destroy it and start again, or refine your initial ideas until you reach something you can be proud of. Get ideas from other people and build your own world.

Minecraft in creative mode feels a lot like writing software, in more ways than one. You start with great ideas, cut them down into manageable components, perform side experiments, try a prototype, then realize you had it all wrong from the start and begin anew for the better, getting better at it in the process. Of course this is the same thrill you had as a kid creating your own toys from random Lego pieces. Any engineer will tell you Lego was their first step towards learning the pleasure of building things.

Minecraft is not the first game to try to adapt the Lego concept to a virtual world, but it is definitely the first successful one. Douglas Coupland described something very similar in Microserfs in 1995: http://en.wikipedia.org/wiki/Microserfs

Technically: Minecraft is a pure Java application which is supposed to run everywhere. In practice you get different bugs on various platforms but all in all you get more trouble with the Java runtime than the game itself, and Oracle is not making this any easier with each Java release. If you do not know how to install Java, I suggest you find a helping hand and make sure they are ready to maintain your computer every couple of weeks or so. From my experience: I spend more time fixing Java installs than anything else in the neighborhood.

What neighborhood? Let me explain.

My kids started playing Minecraft about a year ago. We have several computers at home and they would each build their own world on a different machine. Problems started when they wanted to switch machines for whatever reason. I began by putting their save files on a network share, but that turned out to be quite a mess and they were still each locked into their own world. First attempt: start a Minecraft server on a local Linux box and have the kids share a world through it. That worked for a while, but it required booting an extra, noisy Linux box, even for just one player. Not hard to do, but certainly an obstacle.

Second attempt: use a rented box at OVH to host the Minecraft server. This way it is always up, and who cares if it is noisy? Drawback: you need a working Internet connection to play. Advantage: the kids can play from wherever they want. Of course, word spread pretty fast. We shared the server address with a lot of friends, to the point that I had to install a whitelist to limit incoming users to people we know, or at least friends of friends.

There are implicit rules on this server:

– Never destroy anything you did not build yourself
– Do not use dynamite or spawn explosive monsters
– The rest is left to your imagination

Since I am running a vanilla Minecraft server there is no possibility to ban the use of monsters or dynamite so I have to trust the kids to behave. It does not always work but in general we do not have too much vandalism to deplore. There are compatible versions sporting plugin systems that allow banning this or that, but I just could not figure out how to use them with the existing world we have. Oh well.

Kids are completely unattended on the server. Ages range from 10 to 15, boys and girls, and everybody is fully responsible for their actions. I made it very clear that I am no Deus Ex Machina and will honor their requests as an Admin without questions. Over the past six months this server has been blooming beyond my wildest expectations. Crossing the world from side to side takes maybe 10 minutes flying at full speed, and every single inch is built. The screenshots along this page show how creative kids can be when left alone. It is one thing to see a YouTube video of a Minecraft castle, but building one yourself from scratch with 2-3 friends is a real experience. And if you end up with a half-broken castle with too many towers and no doors, who cares? The kids are so proud of what they have accomplished, it is a real treat to observe.

They did not just create buildings or statues, mind you; they also created arenas to fight monsters, water chutes, mazes, and all sorts of other games for one or more players. From a construction game, this has turned into a game-construction game.

As I recently added a name to the server white list, I noticed that most kids playing on this server are now completely unknown to me. When walking around the neighborhood, I sometimes get greeted by young’uns I have never met before, who let me know the server needs to be updated or restarted.

Last week, my son got home and told me he was meeting friends in Minecraft to build an underground castle. As I inquired “friends from school?”, he answered: “yeah, most of them. Some I don’t know”. He told me they had an appointment at the Market around 6pm. Market? Yes, they named a lot of places. You would not know because they did not bother putting signs, but they have names for all landmarks. I realized at this point that this is much more than just a game server. For many of these kids, this is an open window into another world. Their world.

Keeping the server alive has now become a crucial matter. And I should certainly mind my backups to avoid catastrophes.

Here are some of the landmarks they built over the past year.

[screenshots]

Pretty amazing stuff. And lots of patience, too.

Not sure what the kids will get from it in the end, but this is certainly building more than just virtual houses on a remote server.

Written by nicolas314

Tuesday 8 January 2013 at 12:18 am

Posted in fun


Wilhelm Scream

leave a comment »

Today I learned about: The Wilhelm Scream. This sound effect has been used in nearly every action film for about 50 years. Need proof? Check out this compilation, including Star Wars, Indiana Jones, Batman, Disney classics, Spiderman, Lord of the Rings, Pirates of the Caribbean, and countless others.

Posting this because I just heard it in the trailer for “The Hobbit”.

Written by nicolas314

Thursday 27 December 2012 at 9:37 pm

Posted in fun


Through the looking glass

leave a comment »

SEO

I recently followed a two-day course about Search Engine Optimization (SEO) in Paris. Interesting topic: I learned a thing or two about Google indexing, and quite a lot about how Google is perceived by a crowd that now survives purely on it.

SEO agencies are companies that help your web site rise to the top of Google results, to increase visibility, sales, and fame in general. The course was about tips and tricks to climb that ladder.

To put it bluntly: Google is seen as a god by SEO agencies. Actually: not just a god but the Only God on the Web. The first 10 slides were all about Google’s market share in the search business and why there is no need to optimize your site for anybody else. If you catch Google’s eye you have the Netz in your Pocketz. Fair enough, the point is valid in most Western countries. Other search engines just have to follow suit.

There were mentions of Google sanctions for those who do not behave: if your site is not a good netizen you first get sandboxed, then blacklisted, which means the end of your presence in Mountain View and, by extension, the rest of the world. Apparently this is worse than getting your Internet access revoked, at least for the marketing crowd. Digital death is permanent.

The speaker kept using the words Google and Internet interchangeably, as if they were the same thing. It made sense in all his sentences, but it was kind of scary: a late realization for me that an ad company now completely rules the digital world.

Problem is: ads are a fairly twisted version of our reality. Ads are all about marketing and messages to convey, no matter how much reality has to be deformed to fit. Humans, on the other hand, tend to live in their daily reality.

Not an issue in itself, but the fact is: the more I look around, the more I see web sites formatted in exactly the same way everywhere. The aim is of course to please data munchers and indexers. Go check any recent corporate web site on any topic and you will quickly see the pattern. Place your logo here, your content there, put a site map at the bottom, create mini-sites for dedicated topics, separate dynamic from static content using robots.txt. The web is slowly formatting itself for Google.

Some recommendations were just lame. Examples:

When describing your products, use vocabulary that relates to your field.
Er… Guys, I need to talk about software, do you really think I would diverge into a conversation about lawnmowers and beer?

When choosing a title, make it descriptive of the paragraph it introduces.
Oh really?

Pump up as many keywords as you can into your META tags, this attracts search engines.
Oh wow. After years of success with PageRank, Google has apparently reverted to the dumb algorithm that killed AltaVista. Yet you would think these guys are smart or something.
Edit: Thanks to Mathias for pointing out that the Google search guys made it clear: Google doesn’t use the keywords meta tag in web search

Do not use more than one H1 tag per page.
This one kinda stumped me. What? But the speaker insisted: using more than exactly one H1 tag per page may get you blacklisted. To be fair this is not the first time I heard that one. If you google “SEO H1 tag” you will find numerous appearances of this with no explanation of any kind. The whole thing got somewhat debunked by a Google search engineer in a video: More than one H1 on a page: good or bad? (hint: they don’t care).

The speaker also mentioned an interesting practice: if you want to appear as number one in Google results, you may also want to drive down your competitors, a sort of reverse SEO. Techniques include registering a large number of questionable domain names and replicating your competitor's content onto those fake sites, or posting links to your competitors inside forums dedicated to weird discussions. If you have the resources, you can also set up shady web sites that add horrible links to your competitors. Your fake sites may get demoted in the end, but there is currently no law preventing anybody from doing this. I guess you could always sue for slander, but once the damage is done, your SEO may have taken quite a blow anyway. “Calomniez, calomniez, il en restera toujours quelque chose” (slander away, slander away: some of it will always stick).

There is a feedback loop at work here:

Google tries to index the web with a very pragmatic approach. A generic page-ranking algorithm can become incredibly complex, with millions of special cases that can only be handled manually to work around various site structures.

Web designers want to be indexed better, so they try to figure out Google's ranking algorithm in order to game it.

Google notices this, needs to thwart gaming attempts, adds some rules for ranking to avoid spamming. Back to square one.

This is the same cat-and-mouse game we have seen in our mailboxes, with spam filters and spam senders fighting for mail ownership. Now it is happening on your corporate web sites too!

I just love when a billion people are trying to outsmart one another.

Hopefully this will not turn the web into a gigantic uniform mass of information structured in limited ways. Vive la différence!

Written by nicolas314

Saturday 8 December 2012 at 11:23 pm

Posted in google, seo


Humans control Machines control Humans

leave a comment »

mind control

I used to work for a company where IT issues could only be reported by email. No hotline: send an email, get a ticket number back, and expect somebody to call you about the issue and negotiate a solution. This could have worked if Help Desk reaction times had stayed within reason, but it sometimes took them days or even weeks to answer a request. One of the IT guys carelessly gave me a tip one day:

– Oh yeah, if your request is really urgent it should say so in the email you send.
– You mean in the Subject or the Body?
– Anywhere. The incoming filter puts it on top.

Gee… Let’s try:

“Dear HelpDesk. I need more quota on my Unix account. Nothing urgent”

Sure enough, I got a call within the next minute and was immediately granted more disk space.

I kept using this trick for a while, until somebody must have realized they were being cheated. When urgent stopped working, I switched to this is not an emergency, which achieved the expected result.

That is called gaming the system. Point is: once I knew my emails were first read by a robot, I could influence their priority by choosing my words accordingly. It got to a point where it would be pretty hard for a human to determine what the reported problem was, but my email kept popping up on top of the TODO list, which ensured a phone call from Help Desk within the next minute.
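
I never saw the Help Desk robot, but its filter presumably amounted to something like this naive sketch (the trigger word is my guess), which matches “Nothing urgent” just as happily as a real emergency:

package main

import (
    "fmt"
    "strings"
)

// isUrgent floats a ticket to the top whenever a trigger word
// appears anywhere in the message, negations included; that is
// exactly what makes the system gameable.
func isUrgent(mail string) bool {
    return strings.Contains(strings.ToLower(mail), "urgent")
}

func main() {
    fmt.Println(isUrgent("Dear HelpDesk. I need more quota on my Unix account. Nothing urgent")) // true
}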

So there you go: a human controls a piece of software that controls how fast a human will respond to a human request. We are doomed.

Written by nicolas314

Friday 30 November 2012 at 11:16 am

Posted in fun, gaming


Nexus 4 review

leave a comment »

Nexus 4

 

There have been countless reviews of Google's Nexus 4 already; in this short one I will just give my opinion. In essence: this is the greatest smartphone I have ever seen. My job is all about smartphones these days, so I handle quite a lot of them. Honestly, this one is a piece of art. It delivers on all possible counts.

Appearance

The object itself is gorgeous. A thin, black slab that sits steady in your hand without being too heavy or too light. Not really a fan of the almost-invisible stars on the back, but whatever. The screen is pretty big but not too big. It is still longer than any of my fingers, which forces me to use it with both hands. This may actually be a good thing: holding my phone single-handedly has given me some health issues with my left hand in the past; hopefully that is gone.
What really makes it unique to me is the waterdrop-shaped screen. Instead of brutal angular edges, you get this smooth watery feeling. Touching and feeling the glass has never been so smooth and pleasant. These things are more than just communication tools now; they are really becoming tiny, intimate pieces of yourself.

Technically

Quad-core with copious amounts of memory. This phone is now officially the fastest computer in my house! The user interface jumps to your touch. Agreed: my first-generation iPod touch has always been that reactive with a lot less horsepower under the hood, but hey… can’t have Java and speed on the same device unless you boost processor speed and available memory. Using a quad-core for just a phone is a bit ridiculous but Ok, this is more than just a phone.

Battery life

Not tested enough, but seems Ok. After a day of intensive use I still see 30% left. My previous phone did not finish the day with just 3G on.

Android 4.2

Nothing really new there. The big jump was Android 4; this version just polishes a bit and adds new bugs.

Storage space

I purchased the 8GB version and honestly do not even need that much space. My best friend for listening to music and podcasts remains my faithful first-gen iPod touch. If I really want music on the Nexus, I can start Google Music and access some of the 15,000 songs I uploaded there.

Watching streaming movies is certainly possible on Wi-Fi, but I’d rather use a tablet for that. No need to store these on my phone either.

Conclusion: go get it if you can!

Written by nicolas314

Thursday 22 November 2012 at 10:35 pm

Posted in android, hardware, mobile


The rise of OVH

with 2 comments

Technicolor TG788vn

My new Internet Service Provider is called OVH.

OVH only recently became an ISP. Until 2010 it specialized in web hosting, later expanding to Virtual Private Servers and all sorts of hosted solutions: email, data storage, web servers, and the like.

OVH has an interesting story. It was built over the past ten years on a minimal budget, with all profits carefully re-invested into its own infrastructure. The Wikipedia page about OVH does not give many details. See also an article (in French) in Libération about how the company was built from literally nothing.

OVH is famous for democratizing netboxes in France. For 20 euros/month you get full root access to a Linux box somewhere in their data centers. Start by remotely installing an OS, then configure it and use it as if it were in your cellar. For years now, French users have been running these as remote media and torrent boxes.

OVH later expanded to the UK and recently to the US too. They now have offices almost everywhere and apparently just overtook the big German players in terms of size and equipment.

Switching

My experience switching from Free to OVH was quite painless. From registration on the OVH web site to a working Internet connection at home took a week, with just one single day of network interruption. They did not ship me a magic media box but a plain, simple DSL modem including Wi-Fi, a 4-port router, and two RJ11 telephone lines.

Price

The price point is just the market-defined 30 euros/month. For that you get unfiltered net access, two phone numbers, unlimited calls to many countries (including mobile phones), a bunch of email addresses in the ovh.fr domain, and unlimited storage through their Hubic service.

OVH does not try to give you a media center, their offer does not even include any kind of TV service. Their web page is only about their own services.

Performance

I observed a 10% increase in bandwidth compared to Free DSL, which I attribute to the absence of a TV service. It makes no real difference but is always appreciated.

Weak points

Several points came up over the past week that were not handled so gracefully.

  • Double account: I now have two accounts with OVH, one for hosting and another for DSL. At no point did OVH ask whether I was an existing customer. Merging both accounts requires sending a transfer order on paper with a copy of an ID card. Hahaha… Won't happen.
  • Paper procedures: OVH just loves paper. They provide a generic console to manage all services, yet in many cases you end up printing a PDF, signing it, and sending it to them with a copy of your ID. Not sure if they do that to slow down some requests or if they truly never spent any time optimizing their processes. It certainly feels out of place in 2012.
  • Manual procedures: creating an email address on the ovh.fr domain was apparently done by hand; it took a couple of days and triggered a number of follow-up messages about my “order”. Weird.
  • Bugs in the management consoles: OVH offers two different management consoles, v3 and v5. v3 looks like Windows 3.1 and has about a million icons covering all possible tasks. The icons are of course impossible to identify, forcing you to click endlessly through the many menus to find what you are looking for. v5 is kind of spartan and buggy, but still officially in beta. Hopefully I will never have to touch the DSL configuration again.
  • Minimum 12-month commitment: I decided not to care about this; I hope it was not a bad choice.

Conclusion

So far the good points largely compensate for the annoyances: my home network finally works flawlessly with Google services. No delays on YouTube, Google Play, GMail, or Google search. Escaping the Free DSL circus was desperately needed, and so far OVH really delivers.

Written by nicolas314

Monday 22 October 2012 at 11:30 pm

Posted in free (isp), isp, ovh


That is an odd-smelling cheese

leave a comment »

Smelly cheese

Just finished Who Moved My Cheese? by Spencer Johnson, a quick and interesting read about being prepared for change and how embracing change in your life can make a real difference.

The metaphor runs as follows: four characters face a life-changing situation. They live in a maze, find a roomful of cheese, and decide to live there. Time goes on and the cheese gets stale. One day the cheese is gone and they have to travel through the maze to find a new cheese room. Two characters move forward and find happiness further down the maze; one refuses to move and ends up alone and hungry. The last character evolves through the story: at first he is stuck in his comfort zone, then he learns to move forward and finally finds new cheese.

The story is short and, like other self-help books, filled only with enthusiasm and positive thinking. The book fails in many ways by being so far removed from reality. Change is good, but change can also be dangerous. The maze is not always just an endless stream of walls; it can hide terrible dangers, and the decision to leave your comfort zone could prove fatal. The whole difficulty lies in knowing exactly when it is time to move on and when it is too early.

This reminded me of a Nasreddin Hodja story.

One day Nasreddin is awoken by his father at 5am. His father says: “Come on Nasreddin wake up! The early bird catches the worm! Come with me to the market, we will be the first ones there to sell our stuff”.

But Nasreddin stays in bed and does not even acknowledge his father’s presence. His father insists: “Nasreddin, starting early is the first step on the path to wealth. Get up!”

Nasreddin does not move.

His father says: “Nasreddin, you remember when I left home at 5am the other day? On my way to the market, I found a purse filled with gold. If I had not been so early, somebody else would surely have found it before me.”

Nasreddin opens an eye and says: “You know what? The guy who lost the purse woke up even earlier than you did.” With that said, he goes back to sleep.

The story in Who moved my Cheese? does not hold water, but at least it forces you to think about change. How do you handle it? When do you realize you will soon need to change? When does the cheese start smelling odd enough that you want to move forward? To which you could add: is it better to keep gulping down stale cheese or to get eaten by a cat?

Written by nicolas314

Friday 19 October 2012 at 11:50 pm

Posted in change, fun

Goodbye Free ADSL

with one comment

Dear Free ADSL,

You provided me with Internet access at home for the past ten years, and for this I cannot be thankful enough. I believe it is now time to end our relationship and move forward on our respective paths. The choices you have made over the past two months are yours alone to make and you are perfectly entitled to them, but you will understand that I am equally entitled to my own freedom from Free and will stand by my convictions.

Free ADSL: you have now installed filtering rules on your network for all your users. Watching YouTube has become nearly impossible, GMail is dead slow and Google Search is oftentimes simply unavailable. How could you possibly install this kind of rule and hope nobody would notice? For the past month I have used a VPN tunnel to work around your rules, but at some point I just have to give up. I have no idea why you set this up or why you are not communicating with your end-users on the topic, but in the end I do not give a damn.

There is this one thing called Net Neutrality, and you have failed miserably on that front. As an ISP, your raison d’être is to make sure your customers can access the Internet in the best possible conditions.

Free ADSL: when did you forget you are an ISP?

When you started focusing on upgrading your magic DSL box? When you started making more money on content than on networking? You lost me on your way to becoming the next big media thing. What I want is pure, unadulterated access to any other machine on the Internet; I do not want you taking notice of who I connect to or what kind of data I exchange.

I recently packed up my Freebox and waved it goodbye. My new ADSL provider has been working great so far (more about this in a later post).

Sorry we do not agree on Net Neutrality, but I cannot follow you on this path. I do hope you will not forget again what “Free” means.

Written by nicolas314

Thursday 18 October 2012 at 2:21 pm

Free Fall

with one comment

For the past 15 years, France has relied mostly on three major Internet Service Providers for end-users like you and me.

Orange (formerly known as Wanadoo) is the big one, holding the lion’s share in terms of number of users. Orange is the new name of France Telecom, previously a government-owned business attached to the ministry of Post and Telecommunications (PTT).

Orange used to be an administration in all its glory, and in many ways it still works like one. Orange absolutely loves paperwork; they cultivate a need for endless bureaucracy fit for an administration stuck in the 1960s. The Orange website is a crazy nightmare designed by nobody: the result of an endless accumulation of specifications, layer upon layer of technological eras contributed by oversized teams that never worked together. Orange still considers its website a fun way to attract customers, but its real business remains completely oriented towards animating a gigantic paper-shuffling machine.

In the early 2000s a new competitor emerged out of nowhere. Free Telecom, a brand of the Iliad company, literally defined the French DSL market by offering the lowest price ever for a complete set of home network services. Their flagship price point of 30 euros/month, set in 2002, is still the norm 10 years later. Orange and the other competitors had no choice but to follow suit and start competing on quality and services, which benefited everybody, users first.

Iliad was initially built upon the French Minitel success story, a glorious collection of corny ASCII porn sites distributed over cheap text terminals in the 1980s. Iliad later bloomed into hosting porn websites on the emerging young Internet during the first waves of eternal September.

Free came up with the infamous Freebox: instead of purchasing your own modem, you would get this specialized DSL box. The box was a very smart move: it homogenized the hardware Free customers used to connect to the network, lowering the pressure on Customer Care and giving Free full control over how customers used the service. Soon, all French ISPs were offering their own boxes.

Free made it mandatory to use their boxes to access their network. By declaring all Freeboxes property of Iliad, they thwarted tinkerers who might have been tempted to customize the hardware for other uses, possibly jeopardizing the quality of service and opening the gates to endless customer support. That was a really smart move, though it angered open-source buffs that Free never released the modifications they had made to Linux to run their DSL boxes.

Free was a real pioneer in France and an example for many ISPs in Europe. Free was the first ISP to offer more than just porn over the Internet: Free offered an email address to their customers and soon after full webmail access. You could build your own web site within a 10 MB quota on an Apache server running the latest PHP version. Linux fans were delighted to find an FTP server mirroring every popular distribution at fast download rates.

Compared to previous offers, Free was a paradise for network addicts. Just before opening my account with Free I was still with Wanadoo on a 512 kbit/s DSL line. I called Customer Support one day to ask if they had an NTP server I could use (there was no ntp.org at that time). The answer I got was epic:

– You want to setup date and time on your computer? That is easy: open the Windows settings menu at the bottom left, select Date/Time and… wait… it is currently 12:31 so you just set that.
– I am running Linux
– In that case shut down this Linux window, go to the Windows settings menu and…
– I can’t. I do not have a Windows PC
– You… do not have a… normal PC?
– No I don’t. Look, all I am looking for is an NTP server to set the time automatically.
– Ok then. I will transfer your question to our tech team and they will come back to you in due time.

Two weeks later I received a cryptic email from their R&D team pointing me to the relevant RFC about NTP. Eternal September everywhere, it seems.

Word started to spread in the community that DSL was now available from a real network provider. If you were in tech, you had to have a Freebox at home. And since most people did not know much about the Internet in those ancient times, everybody and their cousin turned to the odd relative who works with computers. Free saved millions in marketing simply by letting knowledgeable people take care of word-of-mouth advertising.

My first call to Customer Support was bliss:

– Do you recommend an NTP server?
– Sure, any web server hosted at Free is an NTP server. Just use the one hosting your personal web page to distribute the load.

The geek in me was just thrilled.

Blinded by the incredible — and legitimate — success of their flagship Freebox, Free kept investing heavily in designing and producing ever more powerful DSL boxes. The awkward pizza-sized box progressively got smaller, expanded into a home router, and was later augmented by a separate multimedia box dedicated to TV. What could possibly go wrong?

In December 2010 Free released their most powerful box ever: the Revolution came with a 250 GB hard disk, a Blu-ray drive, integrated video games, and countless subscriptions to TV channels.

To me this sounded like the beginning of the end for Free.

Providing 250 GB of storage was nice, but it came long after hard-disk sales had hit their all-time high. Lots of people already had a NAS or a networked multimedia disk at home; why would they put their data on a box they do not even own?

Blu-ray seemed like a nice touch but ignored the fact that physical media sales keep decreasing. Buying a physical object to watch a movie or listen to music already belongs to the past, quickly obsoleted by online services like Deezer or Spotify for music, and about a million offers for movies. You would expect a bit more strategy from an ISP regarding the ongoing so-called Cloud revolution.

Free’s business is about networking. Multimedia boxes are a nice touch, but what I expect from a network provider is network services. Give me a Dropbox, a remotely hosted Time Machine for my Mac, an Android market, online video games à la Steam, online radios, online communities. The Internet world is exploding with crazy ideas based on the infamous Cloud and Free just isn’t part of it.

In January 2012 Free got into GSM as Free Mobile. As a Free addict I got a SIM the very first minute they were available. It ran just fine until September.

For the past month my phone has been completely cut off from 3G. I initially blamed it on Android, but testing the same phone with different SIM cards clearly pointed to an issue with Free.

One major difference between Free and its three main competitors (SFR, Orange, Bouygues) is that Free deployed very few antennas and relies on national roaming for service. As a result, my phone kept trying to reach a Free antenna and fell back to another 3G network whenever it could not find one. Battery life went down from a couple of days to less than a single working day, and network quality collapsed to a complete halt in September.

A few more tests revealed that re-routing 3G traffic through a VPN or ssh connection recovered much better bandwidth, though the lag was still hardly bearable. After a bit of searching on the Free forums, I found out that lots of Free customers had reached the same conclusion and started mass-migrating to other 3G operators. The good thing with Free is that there is no fixed-term contract: you can leave at any time, so why bother? I did not even try to file a ticket with Free and just switched to another mobile operator. Same price, working service.

Free decided not to react to these complaints. Just as they do not advertise but rely on word of mouth, it seems they decided to handle a complete service collapse the same way. I think they underestimate how much this hurts them in the long run. Lose your geeks and they will turn against you just as fast as they initially flocked to you.

For the past couple of weeks I have also noticed incredible lag on all connections to Google services from home over my DSL line. The Android market takes ages to download apps, YouTube is completely unusable, and forget about Google Drive (formerly Google Docs). What tipped me over was the fact that I could not reach Google Mail through DSL for an entire week-end, relying instead on my 3G link.

A quick search revealed that Free is apparently in a disagreement with Google about who should pay for the interconnection between their networks. Not sure whether they wanted to apply pressure or were simply overwhelmed by traffic, but it seems they decided to cap all end-user connections to Google services, resulting in apparent denials of service.

As a home user the equation is quite simple: there are several ISPs but there is just one Google. If I cannot access Google, I am not getting my 30 euros’ worth — and do not get me started on Net Neutrality.

After about 10 years I decided to end my last contract with Free.

There have been countless network issues on my Free DSL line over the years, but I had always decided to cut them some slack. Running an ISP is no easy task and I can survive without Internet at home for a couple of days. Consciously degrading your own network against your own customers and leaving them in the dark, however, is far beyond anything I can tolerate. Again: there are other providers now, why bother?

What happens next? For me: hopefully better connectivity from a new ISP. For Free? I have no idea. They have already spent massive R&D on their box; do they have anything left to refocus on their initial business? I am not sure they even realize they have just shot themselves in the foot. The French Twitter scene has been on fire for the past two days over the Google denial of service. How much longer before they cannot go back?

Written by nicolas314

Wednesday 10 October 2012 at 12:44 am

Windows 7 network disconnections

with one comment

Problem: I have a Dell laptop running Windows 7 connected to a GBit LAN. The network link keeps disconnecting as soon as I leave the machine unattended for more than 10-15 minutes. This breaks all connected network shares, running downloads and open IM sessions. It also happens during Webex conf calls, Skype sessions, or any other activity during which I am actively using the computer but not touching any input device.

Re-connecting to the network takes about 30-60 seconds, long enough for the other party to leave the conf call wondering why I shut them down.

After about a month of trial and error I finally found a working solution, documented below in the hope it might be useful to somebody else.

Attempt #1: change energy settings

If you leave a Windows box unattended for long enough, everything shuts down on its own, especially on laptops. Editing the power-saving settings seemed like the first thing to do.

    Control Panel
        Hardware and Sound
            Power Options
                Edit Plan Settings

Switching to “Never go to sleep” did not have the intended effect: the machine effectively stayed awake but the network was still lost. The only way I found to keep Webex sessions alive was to click randomly around in the browser to call up web pages and generate network activity.

Attempt #2: blame the network

Asked the network admins about potential issues: apparently I was the only one suffering from random disconnections. Back to Windows.

Attempt #3: change network adapter settings

A copious amount of googling unearthed the fact that no matter what you choose in the Power Plan Settings, the OS can still turn off a network card whenever it decides to. The only way to prevent that is to modify the power-management settings directly on the network interfaces:

Open Network and Sharing Center
    Local Area Connection
        Properties
            Configure (top-right)
                Power Management
                    [ ] Allow the computer to turn off this device to save power

Still no joy.

Attempt #4: update network card drivers

This is Windows after all: the first thing to do when you have issues is reboot the machine, and if that does not fix it, update all your drivers.

Dell laptops have interesting stories to tell about drivers: there are the ones you find at Microsoft through Windows Update, and a whole other bunch available on Dell’s web site. That is, if you are patient enough to click through millions of pages designed as an incredible maze of incomprehensible references, dead links and serial numbers.

According to Windows all card drivers were up-to-date, which pushed me onto Dell’s web site for further software. Abandon all hope, ye who enter here.

Attempt #5: look for dedicated software from Dell

A couple of hours spent on the site entering serial numbers, downloading files with names like A12017402941.exe in large quantities, installing them and rebooting the machine, all to no avail. The laptop got a boatload of crapware installed, but the network still failed after a random interval of 10 to 15 minutes.

Attempt #6: mess up network settings

Turned off IPv6, switched from DHCP to a fixed address, changed DNS servers, started/stopped Internet sharing: all of the above one by one and then together. None of it had any effect whatsoever.

Attempt #7: keep pinging

Would things change with constant network activity? I wrote a short Python script to ping a remote server every ten seconds and left it running in the background. Still no joy. It seemed the only way to keep things alive was to animate the input devices, and I refused to resort to Fisher-Price technologies.
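
For reference, the loop was nothing fancy; an equivalent sketch (written here in Go rather than the original Python, with a placeholder target host) would be:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        host := "example.com" // placeholder: any reachable server will do
        for {
            // Windows ping: -n 1 sends a single echo request
            if err := exec.Command("ping", "-n", "1", host).Run(); err != nil {
                log.Printf("ping %s failed: %v", host, err)
            }
            time.Sleep(10 * time.Second)
        }
    }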

Attempt #8: fix autodisconnect in registry

More googling, more information about the mysteries of Windows network configuration. Found on the Microsoft support site: fire up regedit and change the key under

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters

The key name is autodisconnect, set by default to 15 minutes. I changed it to 65535 (0xffff). It can also be changed temporarily from the command line by issuing:

net config server /autodisconnect:-1
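
For the record, the same registry change can be scripted instead of clicked through regedit. A minimal sketch in Go, using the golang.org/x/sys/windows/registry package (my illustration, not part of the original fix; it must run with administrator rights):

    package main

    import (
        "log"

        "golang.org/x/sys/windows/registry"
    )

    func main() {
        // Open the lanmanserver parameters key with write access;
        // this requires administrator rights.
        k, err := registry.OpenKey(registry.LOCAL_MACHINE,
            `SYSTEM\CurrentControlSet\Services\lanmanserver\parameters`,
            registry.SET_VALUE)
        if err != nil {
            log.Fatal(err)
        }
        defer k.Close()

        // 0xffff effectively disables the idle timeout.
        if err := k.SetDWordValue("autodisconnect", 0xffff); err != nil {
            log.Fatal(err)
        }
    }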

It kinda worked for a couple of hours, then stopped. Back to square one.

Attempt #9: voodoo

Also found through Google: somebody reported having less trouble after changing a network card parameter you would normally never touch. Why not?

Open Network and Sharing Center
    Local Area Connection
        Properties
            Configure
                Advanced
                    Property: Link Speed & Duplex

Change the value from Auto Negotiation to something matching your LAN capabilities, e.g. 1.0 Gbps Full Duplex for a Gbit local network.

The network stopped disconnecting at that point. I have absolutely no idea which parameter or combination thereof changed this behaviour, but I can also safely say I do not give a damn as long as it works.

Written by nicolas314

Tuesday 22 May 2012 at 11:35 pm

C++ quotes

leave a comment »

Best C++ quote ever: “C++ is good for the economy, it creates jobs!”

I like the proposed alternatives:

  • C
  • Go
  • Throwing yourself in an active volcano

Written by nicolas314

Tuesday 22 May 2012 at 11:32 am

Posted in fun, programming

Go recipe: 3DES

leave a comment »

Just posted a really basic example of 3DES encryption with Go. Check it out on github: https://github.com/nicolas314/go-recipes

See godes.go

Can’t remember where I got the test vectors from, probably an RFC. Did not invent them myself.
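
To give an idea of what the recipe covers, a minimal single-block round-trip with the standard crypto/des package looks roughly like this (a sketch with a made-up key and plaintext, not the test vectors from the repository):

    package main

    import (
        "crypto/des"
        "fmt"
    )

    func main() {
        // 3DES expects a 24-byte key: three 8-byte DES keys (K1|K2|K3).
        key := []byte("24-byte-long-key-here-ok")
        block, err := des.NewTripleDESCipher(key)
        if err != nil {
            panic(err)
        }

        // Encrypt a single 8-byte block; real code needs a proper
        // mode of operation and padding.
        plain := []byte("8bytes!!")
        out := make([]byte, des.BlockSize)
        block.Encrypt(out, plain)
        fmt.Printf("ciphertext: %x\n", out)

        // Decrypt back to check the round-trip.
        back := make([]byte, des.BlockSize)
        block.Decrypt(back, out)
        fmt.Printf("plaintext:  %s\n", back)
    }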

Written by nicolas314

Thursday 10 May 2012 at 7:39 pm

Posted in go, programming
