OpenSource Working Group

Wednesday, 5 November, 2014, at 9 a.m.:

ONDREJ FILIP: Good morning, ladies and gentlemen. Welcome to the OpenSource Working Group. Thank you for coming, and that so many of you came after yesterday's party; I think it's quite nice, and I am glad that you didn't drink too much and were able to come. Welcome. I am here together with Martin Winter; we are the Chairs of this Working Group. So, again, welcome. Martin will show you today's agenda and we will start. Martin.

MARTIN WINTER: So good morning, everyone. First we have a little bit of administrative business, then we have a few lightning talks again. We split them up: we have one at the very beginning, because Sara, who is giving it, has to leave early. Then we have the main talk, about using OpenSource tools to develop OpenSource, from a colleague of Ondrej's, Ondrej Sury, and we have Thomas talking about ExaBGP, the history of how it got where it is and all that stuff. And then we have a few more lightning talks at the end, which are basically the Lua policy engine, a BIRD update and a quick Quagga update.

So, without further delay, I am happy to see that we have a nice person from the RIPE NCC doing the scribing. I forgot his name again. Anand, thank you.

On the agenda, any last minute objections, suggestions, things? We also have ‑‑ actually, it doesn't show it here; there should be one more talk at the bottom, maybe it shows on your slides. At the very end we have a discussion about Working Group Chairs. For all the Working Groups there should be a clear procedure for how Chairs get re‑elected, how new ones get elected and all that, so we have a few ideas which we want to bring up for discussion at the end. We are not planning to make the final decision today, because we wanted to see what ideas the other Working Groups come up with, and we can make the final decision on the mailing list, but we wanted to present some ideas and listen to what ideas and feelings you have. So that is at the very end.

As for the minutes from the previous Working Group session, you should have seen them on the mailing list. You can also find them in the archive. I haven't seen many questions or much feedback, so I assume they are approved, or does anyone have anything to add there?

OK. No.

So then let's start with Sara. She will talk about Hedgehog.

SARA DICKINSON: Hello, good morning. First of all, thank you very much to the Chairs for accommodating me timewise, as I have to head straight out and catch a flight to Hawaii; life is hard. I am here to talk about the Hedgehog project. I happen to be the one presenting this, but what you will see at the bottom is a very generic nod to the ICANN DNS Ops team, to members both past and present who have been involved in the evolution of Hedgehog.

So what is Hedgehog? Well, it's alternative presenter software for the DNS traffic statistics that you can collect with the DSC collector. DSC has been around for a while; it has a collecting component and a presenter component, and Hedgehog is an alternative to just that presenter component. For those of you who were in Athens at this time last year, you may remember that Dave Knight from ICANN did a presentation on Hedgehog in the DNS Working Group, which was about the time ICANN was starting to use this internally. Hedgehog was developed during last year for ICANN by Sinodun, and the reason ICANN started looking at this is that L‑root is quite interesting in how it's deployed. It has hundreds of nodes deployed; if memory serves me correctly, last year it peaked at something like 340 nodes, and I think it's back down in the high 200s at the moment. What they found, with that sheer quantity of nodes and data, is that the DSC presenter just didn't scale up to that, and at the same time they were starting to want a tool into which they could add some additional capabilities, for example statistical analysis of the data. So Hedgehog was developed last year. They put it into production, and late last year it actually replaced the DSC public interface to their data. They are using the 1.x version and you can see it live at that link, but in the hope of stopping a roomful of people clicking at the same time, I have a screen grab of it. This is what Hedgehog looks like. Up at the top you have time controls; you get some basic ones by default, and you can go for more granular control. Also at the top are all the data sets that you can access, which should look familiar from DSC. Then you have the graph. This is what is called an interactive graph, based on a Google Viz chart: you can hover your cursor over it to see values, and use the time slider at the bottom to narrow down your time window and drill right down to small windows of data.
And then at the bottom are all the nodes, collected via our grouping mechanism. It works on a paradigm where you select all of your parameters and hit the Generate Plot button, as opposed to the other way of doing it, where every time you change a parameter the graph regenerates automatically. This is an alternative graph format we have. It is produced by ggplot, and it lends itself much better to going into reports than the Google Viz ones, which have some limitations in some ways, and you can do log scales and stacking in this mode as well.

A couple more things to say: we have an importer written in C++ and a PostgreSQL database, and the web interface uses R. One other important thing to say is that recently the RSSAC committee published a report about the metrics they suggest are collected; there are six, and four are traffic related. With the right configuration Hedgehog can generate graphs for those, and it can also produce the YAML files which are recommended in that report. So, final slide: what is the OpenSource status?

Well, the good folks at ICANN, as promised, have OpenSourced this now. As of September, we have OpenSourced what we are calling a 2.0 release, because there are some significant changes compared to the first release. It's using an Apache licence. It's been released under the dns-stats.org organisation so that it's stand‑alone rather than being associated directly with any institution. We have a website with the user guide and installation guides, and there is a mailing list which is just getting off the ground. The code is in GitHub, and how we handle production releases and active development is under discussion right now, so if you have feedback, we would love to hear from you. Thank you very much, that is everything.

MARTIN WINTER: Any quick questions?

AUDIENCE SPEAKER: Cz.nic. Could you do a flashless version?

SARA DICKINSON: The 1.0, which is on the ICANN website, requires Flash because that is what the Google charts need. There is now a flashless version of Google charts which is pure JavaScript, and the 2.0 version of Hedgehog supports that. There is a bug in it with the wrapping of the legend, so it's not the default, but it's a configuration option as to which one you use. Sorry about that.

MARTIN WINTER: OK, thank you, Sara.

As next speaker, we have Ondrej Sury from CZ.NIC.

ONDREJ SURY: Hello. This is a presentation of what OpenSource tools we use at CZ.NIC, mainly for Knot DNS development. All examples in this presentation are from our Git master branch, so please don't beat me if there are some errors, and I hope I will not embarrass our developers too much.

So, just a short summary: you might know some of the tools I am going to speak about, and I would like to hear, at the end of the presentation, if you have any suggestions of what else to use. So we have something called GitLab, which we will talk about in a bit, and then there is code coverage, static code analysis and code complexity.

Here is the premise I use in this presentation: you will see no clouds, everything runs on our own premises. So what is GitLab? GitHub is a service I am sure you know, and GitLab is a similar service that you can run on your own premises. It's Ruby on Rails, because that is the new PHP now. It's a web interface to Git, that is one of its functions. It also contains a lightweight issue tracker and a wiki, and it can do public, internal and private projects for you.

What do we use GitLab for? We use it for code review. Basically, it doesn't enforce the process. You might know Gerrit or tools like that, which enforce it: you must go through code review. GitLab doesn't enforce that, but you can get your developers to use it just by speaking to them. Basically, branching is very cheap in Git, so we put every new piece of code in a branch, and then when we want to merge that branch into the master branch we create a merge request. It integrates with continuous integration, and when it builds successfully, someone looks at the code and says, hey, well, I don't like this, could you repair that, or, yeah, that looks good, it can go in.

I am not sure if you can see this, but at the end of the presentation there are links for all of the things I am going to speak about. We have an open GitLab for CZ.NIC Labs; we have basically opened all the tools I am going to speak about, because, well, I don't really believe in privacy in OpenSource.

This is an example of one of the merge requests we have right now ‑‑ it fixes some error code. And here we can see that it builds successfully. So when some other developer comes in, he can review the code and merge it into the master branch.

Continuous integration. Probably the best known OpenSource continuous integration service is Jenkins. You can define the environment there: build slaves on different platforms ‑‑ we have BSDs, we have different Linux distributions, we have Mac OS X in this environment ‑‑ how to build a project and on what slaves you want to build it, and you define triggers, whether it should be built on every commit into Git, every Sunday, or on every successful build of another project. It's quite powerful. Unfortunately, it's in Java. But I am sure you can get around that.

This is the main page of Jenkins, again, our Jenkins. We have quite a lot of projects defined there for different conditions, and we also build static web pages from there and stuff like that.

Then, GitLab has its own continuous integration server, but only for Ruby projects. So we wrote our own thin layer on top that emulates the GitLab CI server. It can start a new build for every merge request ‑‑ again, at the end of the presentation there is a link for this project ‑‑ and basically it reports the build status back into the merge request. So you can integrate the GitLab and Jenkins services together with this thin layer.

Next thing: code coverage. You do have tests, right, for everything? Well, you do. But how do you know how much code is covered by the tests? There is a standard part of GNU GCC, it's called ‑‑ there are some flags that you add to your project to enable it. And then you can generate graphical output from that. I will show it in a moment.
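As a side note, the line-coverage idea described here for C can be sketched in a few lines of Python with the standard library's `trace` module. This is a hedged illustration of the concept, not the GCC tooling the talk refers to; the `classify` function and its input are invented for the example:

```python
import inspect
import trace

def classify(n):
    # Two branches: one call can only ever exercise one of them.
    if n < 0:
        return "negative"
    return "non-negative"

# Count executed lines, much like GCC-style line coverage does for C.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)

src, start = inspect.getsourcelines(classify)
body = set(range(start + 1, start + len(src)))   # lines after the def line
fname = classify.__code__.co_filename
hit = {ln for (f, ln) in tracer.results().counts if f == fname and ln in body}
print(f"covered {len(hit)} of {len(body)} body lines")
```

Running it with the argument 5 exercises only one branch, so the report shows partial coverage, which is exactly the signal the graphical reports visualise per line.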

This is the current master of Knot DNS. As you can see, we are not perfect. I don't think anybody is. This includes both unit tests and functional tests. Some parts are really well covered, some less. For example, this is the scanner for the zone files, and it's quite hard to write tests that exercise every line of the generated parser, because it's not even C code that we wrote ‑‑ it's generated into C ‑‑ and this just counts whether it is covered line by line, and it's very hard to write such tests.

Static analysis. I think the Clang analyser is pretty standard; it's part of the LLVM suite, and it analyses the code as part of the compilation. It has plug‑ins to integrate well with Jenkins, because you don't want to run it yourself every time, you want to automate that. Then there is Cppcheck, which is very similar, but it does only static analysis of the code, and again there are plug‑ins to integrate it with Jenkins. And the last thing, which we are not yet using but could as an option, is OCLint ‑‑ again static code analysis, I will show you this a bit later, but it's very strict and reports a lot of stuff.

This is a summary from the Clang analyser. It shows you a summary of what types of bugs were found, and here is an example list of the bugs it found. There is some stuff, but it's the master branch, so that is expected, and some of these are in tests, where it is expected. And here is an example report: it found that if you take the false branch here, it will just return from the function and leak some memory, so it probably needs to be fixed. Or it can never happen, that is the other option.

That is something you will want to look at if you are a developer and it finds such a bug.

Here is the report from Cppcheck. Again, this is from Jenkins, it's the current master, and it has a lot of comments about style. It's very picky about what it likes and what it doesn't. For example, here is the list of the issues, and for most of these it says the scope of the variable can be reduced. It's a feature in C99 and C11 that you can define variables further down, so it shows you that you can reduce the scope of a variable and make the code more readable.

For the code complexity tools, I already told you about OCLint. This is something we are looking into, and it finds all kinds of possible bugs: unused code, complicated code, redundant code and other stuff. Here is a report from this morning, it's available on‑line, and you can see what kind of stuff it finds. For example, it complains about a long variable name, but I think this is from generated code. It's very interesting what kind of stuff you can find in the code if you use these tools.

So, what is cyclomatic complexity? It's a software metric that counts how many paths there are through your functions, so if you have complex functions that can take many different paths, it uses this measurement to point out that you should probably simplify those functions. There is another tool called Lizard; it's a very simple tool, it doesn't include the headers of the C code, but it does the job quite fine. This is an example of what this morning's Lizard run gave us, and as you can see, the most complex function is way above all the others, and it's the scanner generated by Ragel. It's really huge.
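For illustration, here is a rough sketch of what such a metric counts, approximating McCabe's definition (1 plus the number of decision points) with Python's `ast` module. This is a hedged toy, not the algorithm Lizard itself uses, and the `grade` function is invented for the example:

```python
import ast

# Node types that open an extra path through the code.
BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source, func_name):
    """Approximate McCabe complexity: 1 + number of decision points."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            return 1 + sum(isinstance(n, BRANCHES) for n in ast.walk(node))
    raise ValueError(f"no function named {func_name}")

sample = '''
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    elif score >= 50:
        return "C"
    return "F"
'''
print(cyclomatic_complexity(sample, "grade"))  # three if/elif tests -> 4
```

A generated parser with hundreds of branches would score in the hundreds by the same counting, which is why it towers over everything else in the Lizard report.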

OK, so here are the resources. I put links to the sources; you can look at the CZ.NIC GitLab and Jenkins, all the stuff is there. Here is the GitLab‑to‑Jenkins integration plug‑in if you want to run it yourself, and the rest are just the ordinary sites for those projects. OK. That is the end of my talk.


AUDIENCE SPEAKER: Hello. Not really a question but a comment. I loved your talk, and I think it's really good that you do all of this not in the cloud, but some projects don't have the capacity to do that, so for those I would like to suggest GitHub, Travis and Coverity, which cover a lot of the same needs, because everybody should do this whether they have money or not.

ONDREJ SURY: True. But I wanted to talk just about OpenSource.

AUDIENCE SPEAKER: Well, yes, sure, it's still OpenSource, but the tools you are using there are closed, right?

ONDREJ SURY: Thank you. There is also Coveralls for the coverage.

MARTIN WINTER: If there are no more questions, thank you, Ondrej.

So next speaker up is Thomas Mangin, I think, about ExaBGP.

THOMAS MANGIN: Good morning. Now you are awake, I am sure of it. Well, I will speak about ExaBGP. I think the title says exactly what the slides will cover: how it was created, how we got there, how it is being used and what is next, and the part which is how you can contribute.

So, ExaBGP was started as, I would say, a side project in 2009. From the start it was released under a BSD licence to make it as easy as possible to use for everyone, so anyone could integrate it into their networks and use it without any licensing fee. Through the years, if you look at the GitHub commits, you will see that twice the code was rewritten from scratch. It's now quite mature, even, and it is used sometimes in places I didn't expect.

I would like to explain what ExaBGP is, because most people think it's a BGP daemon ‑‑ thank you for the T‑shirt ‑‑ and ExaBGP is not a BGP daemon. It has always been a tool for people needing access to BGP, to do things with BGP, but who don't need to do any local forwarding. So ExaBGP doesn't forward; there is no such feature. It's very good at parsing and generating BGP messages.

So, why did I write it? The first reason was to learn BGP myself, because when I started I had no BGP experience as a programmer, and I wanted to try writing some async code and make it scalable. But mostly I wanted to make sure the code was easy to understand and maintain. I didn't write the code to be complex. I tried to make sure everyone could read it, jump into it and modify it; the goal was that people could modify it and add features.

The code, like every project which has matured over five years, is quite large. There are many files, and I am quite sure everyone who wanted to jump into it found it a bit complex. Even getting started is not very obvious, but the BGP parsing library is quite self‑contained and has been used in other BGP projects. Orange took it and could use it without any change, or with minor changes.

So ExaBGP is a BGP gateway: a tool for you to use from your back‑end tools, or whatever you have as business intelligence, to be able to speak to your BGP core and make changes in your network in an easy manner. To do so, there is a text API, which means you can send commands as text and get updates back as JSON or text.

So, the first release. Let's go very quickly through the history, because it's interesting to know where the features come from. The first thing was a route injector, to allow my business to do HA between data centres without building any Layer 2 network between the two places. Quite quickly, everyone told me: you should have IPv6. Everyone we spoke with long enough made it clear that you have to do it, otherwise they will just be on your back forever.

The next thing: I found this new spec at the time, which was FlowSpec, which allowed you to send firewall‑like rules via BGP updates, letting you use your core as a firewall. So if you had the right routers at the time, you could firewall traffic using BGP, which for us was practical, so we did it.

At this point there was no API; like with every piece of software, you write a config, you send a SIGHUP to reload, and I found that extremely impractical for controlling a core network ‑‑ just changing configuration was not practical. As I was a Squid user, I knew the quite nice way Squid allows a third‑party application to control it, and I just took the idea.

So, ExaBGP can be controlled using a pipe, which means it will read whatever your programme prints, and it can print back what the network has done. For example, if an update comes in from the network, ExaBGP will parse the packet and turn it into a clear text format ‑‑ a route was announced with that prefix ‑‑ and just send it to the input of your programme, so you can read it as if it was typed on a keyboard. If you want ExaBGP to announce something, you write it and we send it out. You can write the programme in shell, in Python, in whatever you want; every language will work. There are some 20 RFCs which have been implemented. One of them allows you to restart ExaBGP without flapping routes, so you can announce a route from a server and reboot it and still not have a black hole. A lot of them are implemented, so if you have a use case, the code you need is probably already there.
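The pipe interface described above amounts to a tiny helper programme reading one stream and writing another. Here is a hedged sketch in Python: the `announce route ... next-hop ...` command string follows ExaBGP's text API, but the JSON field name used on the way back is a simplified stand-in for the real message format, and the addresses are invented:

```python
import io
import json

def announce(out, prefix, next_hop):
    """Write one command in ExaBGP's text API; ExaBGP reads our stdout."""
    out.write(f"announce route {prefix} next-hop {next_hop}\n")
    out.flush()

def handle(line):
    """Decode one JSON message coming back from ExaBGP.
    The 'type' field here is a simplified stand-in for the real schema."""
    msg = json.loads(line)
    return msg.get("type", "unknown")

# In a real helper `out` is sys.stdout and messages arrive on sys.stdin;
# a StringIO stands in for the pipe in this sketch.
pipe = io.StringIO()
announce(pipe, "192.0.2.0/24", "10.0.0.1")
print(pipe.getvalue().strip())        # announce route 192.0.2.0/24 next-hop 10.0.0.1
print(handle('{"type": "update"}'))   # update
```

Because the interface is just text on stdin/stdout, the same helper could be written in shell, Perl or anything else, which is exactly the point being made.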

And as I said, when I started, it was a route injector to inject routes. Now you can use it as well to connect other BGP daemons to it, and it doesn't have to be the one making the connection out.

The use cases for ExaBGP are varied, and the first large deployment or use case I saw for the programme was to block DDoS. Many ISPs are under attack ‑‑ NTP amplification attacks ‑‑ and to stop them they wanted to use it to drop the traffic at the edge. That is still a very large use of it. There is quite a good presentation on‑line from people who have done it, which shows you how to do it from A to Z; what you need to do is work out, from the statistics, from your flow information, what IPs are attacking or what is being attacked.

The second big use I saw after that is people doing the same as I did, which is service routing/failover. There are two very good talks on‑line which explain that. The first one, from Vincent Bernat, uses it with route servers to be able to control and fail over services. He wrote an application which is now mainline in the programme and can be used; I will let you read the blog, because it's quite a long post and he explains it much better than I could now. And the second one is a demonstration of how you can use BGP and ECMP on routers to avoid using load balancers, using the fact that modern routers can hash flows very stably and therefore you can spread packets across multiple servers. Again, there is a talk you can read, and if you have any questions you can contact me.

I saw some very, very scary uses of ExaBGP over the years. I think the last one is the one which gave me the most sweats: a large network told me they wanted to run programmes on the network and send 200,000 routes every five minutes, programming the network to optimise traffic instead of using normal routing, which means they crunch the data and find best paths in a way which is not normal, using BGP. I spent quite some days making sure it worked, and we found quite a few bugs in the process, but now it's live, and it's probably a testament that the application can be used at large scale.

Another thing you can do is use it for things like a looking glass. One of my co‑workers, Daniel, wrote a tool: if you connect ExaBGP to all of your iBGP speakers, it can let you graph the route to a prefix and see how your network routes it. The good thing is that the tool is backed by a database, so you don't need a live working network to see how the thing works; you can check it even if the network is down. You can play with it.

And the RIPE NCC have been experimenting with using it for RIS; if you have any questions about that, I will send you back to the RIPE NCC.

Now, the part I care about most. It's a normal OpenSource project: I have many people contributing, asking for features, coming and going. It's all very nice, it keeps me busy and motivated with feedback from the community. I have had a few bug reporters, and if you look at the time to fix, it's either very short, a matter of hours, or very long, a matter of "oh, crap, this is a large job". Most of the time I tend to be quite responsive.

The source is all on‑line. You can access it, and the development branches are on‑line as well, so the code is there. If you want to do anything new or develop something, take my tree, because I will probably want to test it myself before pushing it live.

Up to now, ExaBGP has had only one release branch, with both new features and bug fixes, which means you get the latest fixes but you also add the new bugs from the new features. As it's now used in production in many places, I am going to stabilise a branch which will stay as it is. One of the big reasons is that when 4.0 is released, I want to rework the formatting, which will probably break backward compatibility, and I don't want to upset people who have invested in it and use it now and would have to change all their developments.

Currently, what is cooking: I promised a new version within six months to Paolo, and I will find a way, otherwise he will get upset. I was given some PCAPs yesterday ‑‑ thanks, RIPE, for that ‑‑ and I hope to be able to look at them in the next few days. There is some code as well in the master branch which you shouldn't rely on yet, but as I said, when the guy from Orange did the job, he implemented VPN support using an old version of the code, and it has been cleaned up. It's all there; now it needs a configuration format to be able to use it, but all the back‑end engine is there. What is on the wish list as well is a CLI: currently, if you want to know what is happening, you must look at the output of the daemon, so I intend to write a CLI, and once that is done we are looking at making a route server with it. You can do crazy things with BGP once you have the right tools.

Software quality of ExaBGP: some tests check that everything is as it should be ‑‑ what I generate, what I parse ‑‑ and I can add my own updates. Some tests check things against previously stored data, so in effect there is no regression in what it generates. And I can make ExaBGP speak to itself, making sure that I can test conditions. I would like to have more of those. Finding new features is more interesting, so you should care about that and care about your features. I am just joking.

Obviously, it could be better; it could have more tests, and that is a part where it's easy to help. Unit testing has not kept pace; I mostly do end‑to‑end testing, and I will need to work on that. And if you have a large deployment, I would say: lab it first to find the limits. What if you have 200 peers sending routes? That is a large deployment, and I cannot tell you; I have 20 peers on my network, so if you want to do 200, please test it first. But up to now most people who have used it have been happy, so it's quite good.

Documentation, that is the one thing which really is a big problem. Most people look at the software and find it quite hard to get to grips with. I am clearly missing documentation, both on the wiki and in the help, and it's one area where I would appreciate people's help: simply feed back what you have done, so I can try to make the wiki slightly better.

I will go very quickly through the code, because I think it's interesting, but it's something where, if you are interested, I will take you off‑line ‑‑ if you like programming, come and speak with me. It uses coroutines, which, if you are old enough to remember them, work quite well: one coroutine per peer, one function, which makes the code quite easy to understand because there is one path.

The code is also well organised. I recently looked at code from another programmer, which is very nice code as well if you want to look at it, but it's one flat file ‑‑ all the BGP code in one flat file. In my case, I try to break things up, so you have message, open, and so on. So if you want to understand how things are structured, it's quite easy to dig in and find the right class.

And as I said, it's a personal project. I have many other activities in my life, so I tend to work on it when I can, often late at night, or during working hours, because I am lucky to work for myself and have my own business; when people ask me for something, I will take the time during office hours to help. If you want to help, please do. I will leave you with this slide, which shows, if you like BGP, how to contact me and where to find the ExaBGP presentation. And I will take your questions.

MARTIN WINTER: OK, thank you. Questions? I also want to mention there is a second talk Thomas is giving in the Routing Working Group, I think after 4:00 on Thursday afternoon, which covers the technical side of what is going on, the features and so on.

THOMAS MANGIN: The second one is more to show what you can do with it ‑‑ more for operators who want to do BGP stuff, what can be done with it.

AUDIENCE SPEAKER: RIPE NCC. First of all, thank you for the project. As you mentioned, we have a new version of the RIS collectors which uses ExaBGP, and we will start deploying them before the end of the year. We will give feedback to this Working Group on how it works, because we will have a higher number of peerings than 20. There is one question I have: do you have any plans to implement RPKI RTR (RFC 6810), because it could be very useful for analysis?

THOMAS MANGIN: I personally don't plan to do it myself, but I welcome any contribution for it. RPKI is not my cup of tea, so I really don't think I would be the best person to do it, but if anyone wants to do it and needs help, I will welcome the effort and give all the support they need to make sure they can do it successfully.


MARTIN WINTER: Any other questions? No. OK. Thank you.

So we have a few short lightning talks. I think the first one is about the Lua policy engine. Peter van Dijk.

PETER VAN DIJK: Thank you. Thank you for having me. I work at PowerDNS, one of the popular OpenSource name servers, and today I will tell you something about our Lua policy engine. This talk has a few goals: one is to promote this feature; one is to promote in general the use of embedded scripting languages, because I often see people writing custom software where something that already exists would have done; and third, I think what we are doing here might apply directly to other software, perhaps to other name servers.

So the context for this is DNS amplification attacks ‑‑ I am sure all of you are familiar with those, I hope so ‑‑ or reflection attacks. One solution for that, RRL, was proposed a few years ago by Paul Vixie and Vernon Schryver; BIND, NSD and Knot have implemented it, and I understand it works well. But there are competing proposals as well, and we figure that attackers are going to change, so software needs to adapt, and you cannot wait for a developer to patch and then wait for the daemon to be updated, etc. We wanted to do this slightly more dynamically. The idea is that you get enough rope to hang yourself: to be able to support the original RRL proposal, to deal well with the various layers inside PowerDNS ‑‑ I guess this is a PowerDNS‑specific problem ‑‑ and most importantly, we wanted administrators to be able to update things whenever they want without restarting daemons, etc.

What do you do? You tell your PowerDNS when you start it up to load a certain Lua script. Inside this script you have a function called police. The first parameter is the request, which is always set; the second is the response, which might be set, because we also call this function when the request comes in; and the third parameter tells you whether this query came in over TCP. You look at the stuff you have got, you interrogate it, you make a decision, and PowerDNS respects this decision.

The request and response objects are of the same type. You can call various things on them: you can get the question name, the question type, the remote host. The next two are a bit special, taken directly from the RRL spec, because that says: if you expanded a wildcard, do not take the expanded name for your rate limiting, use the original name, the actual star; similarly, when you send out an error, do not use the QNAME but the zone name that you found the entry in. Furthermore, you can look at the size ‑‑ I am not actually sure RRL uses this, but you may want to think about amplification factors: your question is 40, 50 bytes, your response is likely to be bigger, and you may want to keep in mind how much traffic you are sending back in a reflection attack. Finally, you can take the rcode and the resource record count into account.

This is a very simple example, and it actually works. If there is a response set ‑‑ so if this function is being called right before sending out an answer ‑‑ and if a wildcard was expanded, send a truncate instead. This is a very short example of the simple things you can do.

This feature is still in development. One of the things we are going to ship is a Lua implementation of the original RRL proposal, and this is the first few lines of the current version. Anyone who has used RRL will likely recognise some of this.

And as mentioned earlier, RRL wants you to use the non‑expanded wildcard to do your rate limiting on. So if a wildcard has been expanded, then instead of the QNAME it takes the original wildcard name; if an error was sent, like a name error, it takes the zone name instead. And you see the token as defined in the RRL proposal, which you just combine so we can do rate limiting on it. You can also talk to your script: PowerDNS has a control CLI tool, and there is a policy command function inside the script. In this case ‑‑ this is a snippet from an example script that does rate measurements ‑‑ it returns the current QPS rates, for example. Similarly, through the same mechanism, your script might take commands from outside: you may have a DSC running, get data from that, and define policy based on it. You can get that into your daemon in realtime and react to any kind of attack if necessary.
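The token-and-rate-limit idea can be sketched outside Lua too. Here is a hedged Python version, not PowerDNS's actual implementation: it keeps one token bucket per client-and-token pair, where the token combines zone name and rcode as described above; the limit, addresses and names are invented for the example:

```python
class RateLimiter:
    """RRL-style limiter: one token bucket per (client, zone, rcode) key."""

    def __init__(self, qps_limit):
        self.qps_limit = qps_limit
        self.buckets = {}  # key -> [tokens, time_of_last_update]

    def allow(self, client, zone, rcode, now):
        key = (client, zone, rcode)          # the "token": zone + rcode per client
        tokens, last = self.buckets.get(key, (self.qps_limit, now))
        # Refill proportionally to elapsed time, capped at the limit.
        tokens = min(self.qps_limit, tokens + (now - last) * self.qps_limit)
        if tokens >= 1:
            self.buckets[key] = [tokens - 1, now]
            return True                      # answer normally
        self.buckets[key] = [tokens, now]
        return False                         # candidate for drop or truncate

rl = RateLimiter(qps_limit=5)
burst = [rl.allow("198.51.100.7", "example.com", "NXDOMAIN", now=0.0)
         for _ in range(6)]
print(burst)                                                         # sixth query refused
print(rl.allow("198.51.100.7", "example.com", "NXDOMAIN", now=1.0))  # bucket refilled
```

Keying on the zone and rcode rather than the full QNAME is exactly why the wildcard and zone-name rules above matter: a flood of random names under one wildcard still lands in a single bucket.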

Finally, PowerDNS exports a lot of metrics, like other Daemons do, and we also allow a script to report some metrics, so you can graph these easily with the graphing you have set up. This feature was sponsored by ‑‑ the slides, documentation and code are at this URL. Thank you.

MARTIN WINTER: Thank you, Peter. Questions? It's difficult to see in this room. Okay I see nobody with a question. Thank you, Peter.

The next talk is Ondrej, he will give us a quick update on BIRD, what is going on there. He has very nice T‑shirt you have probably already seen, but he doesn't wear one himself.

ONDREJ FILIP: I feel shamed by Thomas because I couldn't wear an ExaBGP T‑shirt, but hopefully he will make some next time and I will be able to present in one. Hello again, I have really just a brief update about the current BIRD development, mainly because we are doing some changes. We are just before quite a big leap for BIRD, so we will have many experimental branches, and I just wanted to explain to you guys how this is going to work.

So, the current version is 1.4.5. The whole 1.4.x branch mainly added the BFD protocol and many features like BGP add‑path and graceful restart. The reason for these changes is mainly that the biggest supporters, and also the biggest or at least the heaviest users, are Internet Exchange Points, so they really focus on BGP and use BIRD as a route server, so we were working on BGP quite hard. Currently, at least according to Euro‑IX statistics, BIRD is used by more than 50% of route servers at exchange points, so that is really quite a big responsibility for us, because if we shipped some bug, all of them would break, and it would be really, really visible in the current Internet flows.

So we slowed down the development and we are focusing on testing, so this is a very well tested release, and if you run an Internet Exchange Point, I think this will be good for you for some period and you are probably not really interested in the other branches. I don't say that there can't be a 1.4.6 or something, but the 1.4 release will definitely be good for Internet Exchange Points for a while.

What is going to happen, probably today if I can find some time, is that we will release a pre‑release or release candidate version of 1.5.0. The main reason for this version is that we did a huge OSPF redesign; the whole implementation was changed, actually. We fixed many, I wouldn't say bugs, but things that were not really compliant, which turned out to be tiny problems which actually took a huge redesign to fix. So OSPF was really changed, and we added an OSPF extension, not a big deal if you don't know what that means, actually. But OSPF was redesigned, and as the change is really major, we will not release the full version yet; we will just release a pre‑version, and I would kindly ask you for testing. Especially if you are guys who play with OSPF, please help us with that.

We don't expect to ship it to distributions, so you need to compile it yourself, sorry for that.

After the testing, the final 1.5.0 will be released, so I expect it soon, probably by the end of the month, maybe earlier. And 1.5 will be the last sort of minor branch of 1.x.x, so this is going to be the last version of BIRD as it looks now, because all the other changes we have are for a future version, which is called 2, and this version will be a little bit strange. The main change will be that we will integrate IPv4 and IPv6, which was one of the weaknesses of BIRD, so that we could fully support multiprotocol BGP and stuff like that, and mainly IS‑IS, because this protocol was hard to implement in the current architecture of BIRD. So that is going to be version 2. And this version will be experimental. We will not guarantee that we will not change the configuration style and stuff like that, so it is definitely not a version for you if you are using BIRD in production networks, just to warn you in advance. Version 2 will be a little bit experimental, not in terms of stability, but we will probably tune the configuration style and the output a little bit.

So, we expect to have some release next year, at the beginning of next year, and we will play with it for like six months or something, so we believe the stabilisation will happen by Q3 2015. And after that we will probably release a new stable version, as with version 1: no big changes, a stable configuration, just incremental updates, nothing big. So that is actually the future.

And the last thing I would like to talk about, and I was a little bit encouraged by Jeff Osborn's speech in the plenary, actually, is how we work and what we are working on now. BIRD has been supported by cz.nic for a very long time, but now we have reached a phase where we have to be self‑sustainable, because the long‑term cz.nic goal is to have all its OpenSource projects sustainable. So, currently we are self‑sustained, but we have a lot of supporters who pay our bills, so thank you very much, all of you who support BIRD.

As you probably noticed, we slowed down the development. It's not because we don't have enough manpower; the main reason is, as I said, that we have to be really responsible with our releases, as BIRD is now really huge and widely deployed. One guy told me that they lost one‑third of traffic at an exchange point, so a bug in this software could be a real disaster for the Internet. So we really test properly. But to gain speed again, we plan to expand our team; actually, we are hiring new people, and again that is just because there are supporters, there are companies that are paying for the OpenSource, and I am really thankful to them. And also, as Jeff mentioned, every release has thousands of downloads, and there are a lot of instances installed, you know, via some distribution, and again the number of supporters is about four orders of magnitude lower, so the same thing as he mentioned. So if you want to support OpenSource, talk to us, and again, thank you very much to those of you who help us. Thank you, that is all.

MARTIN WINTER: Thank you, Ondrej. Any questions? Then I have a quick question. I noticed you mentioned more RFC compliance in 1.5, and I was wondering how you achieved that from the testing point of view?

ONDREJ FILIP: You mean ‑‑

MARTIN WINTER: I was wondering how you verify the compliance ‑‑ if you have any specific tools which you developed, or are using some ‑‑

ONDREJ FILIP: Yeah, actually, you know, we don't have many formal methods. What we did was build an experimental network which changes often, and we use, for example, ExaBGP as well for testing BIRD, trying to flood it with many updates and stuff like that. So we don't have any formal method; we just really use it in a network which is very badly designed ‑‑ badly built. And before every release we send BIRD to some nodes that we use for Beta testing, like AS112 Anycast nodes and stuff like that. We don't really have any formal method. We would love to have one, but it's quite a complicated area, and many bugs in this kind of software just appear once in a month or something like that, so it's really hard to debug, and it's not easy to test it in some scientific manner, as I said.
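The flood testing mentioned here works because ExaBGP can run a helper process whose stdout lines become BGP announcements on its peerings. A sketch of such a generator (the prefixes and next hop are made up for illustration) could look like this:

```python
# Sketch of an ExaBGP helper for flood testing: ExaBGP reads lines like
# "announce route <prefix> next-hop <ip>" from the helper's stdout and
# turns them into BGP UPDATEs. Prefixes and next hop here are made up.

def announce_lines(count, next_hop="192.0.2.1"):
    lines = []
    for i in range(count):
        # carve /24s out of 10.0.0.0/8: 10.<i//256>.<i%256>.0/24
        prefix = "10.%d.%d.0/24" % (i // 256, i % 256)
        lines.append("announce route %s next-hop %s" % (prefix, next_hop))
    return lines

if __name__ == "__main__":
    for line in announce_lines(1000):
        print(line, flush=True)  # ExaBGP consumes these from the pipe
```

Wired up as a `process` in the ExaBGP configuration, this pushes a burst of routes at the Daemon under test, which is the kind of informal stress testing described above.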

MARTIN WINTER: OK, thank you, Ondrej. Next up is Paul Jakma, he is a colleague of mine and will talk about Quagga, a quick update what is going on there.

PAUL JAKMA: Here to talk to us about Quagga, just a quick status update. So, the OpenSource routing project, which used to be at ISC, has moved to NetDEF, another US nonprofit, and its remit is to support OpenSource routing generally. It's supporting Quagga maintainers, and also does testing of routing protocols, compliance testing; I think Martin does a lot of that. There are reports available at opensourcerouting.org at the moment for Quagga. But I think Martin is probably open to testing other things as well, so if you have other OpenSource routing protocols, I don't know, talk to Martin, maybe.

So some of the existing stuff: there are regular compliance test reports put on the website. I think Martin ran them just a few weeks ago, a month ago.

So, I suppose everyone here is mostly interested in BGP. Things that went into the recent releases, the last three releases covering the last year or so: the GTSM TTL security, which I think Nick Hilliard did a couple of years ago, has been extended. Recursive route support, which has been a bugbear, hopefully works a bit better; a lot of ISPs, one big one, require recursive routes for BGP ops.

We are seeing a lot more networking vendors and multinational telecoms working on Quagga, which is nice. And there have been many bug fixes in the last few releases, and we have got another release coming soon.

Processes for Quagga have been streamlined a bit; we have tried to make them more transparent. David Lamparter has been working on that. We have brought in patchwork to try and track patches, because we have so many contributions; some are easy to integrate and some are not, and it takes time. It used to be a problem that these things would get lost. We have patchwork to track that, and it's quite a nice tool: it will scan your mailing list, automatically find things that are patches and keep track of them. People can reply with Reviewed‑by and patchwork will pick these things up. I think there is a lot more streamlining of the processes, and particularly devolving acceptance of patches to the community, so that it is more transparent and less opaque.

So, I think we have a minor bug fix release imminent, for some BGP bugs that were caught by testing. There is a PIM SSM Daemon which we are going to integrate soon. There is an RPKI (Resource Public Key Infrastructure) implementation out there and patches to integrate that into Quagga; I think that work is also potentially applicable to other OpenSource Daemons. It's a C library; somebody hooked it up into the route map support for Quagga, and I think it's more generally usable. There are a few vendors with long, long streams of fixes, and we are working on going through those and synchronizing them. In the long run we need to get BGP add‑path for route reflectors and route servers, and there is performance and scalability work, looking at what is out there and possibly taking some other approaches as well. I think bits of the BGP peer state machine can probably be run in a separate process, as well as some of the CLI, so we need to look at that.

After that, it's the wish list. Comments from the floor, please?

MARTIN WINTER: Any questions?

NICK HILLIARD: From INEX. There was a whole pile of patches posted by Cumulus Networks a few days ago in their own patchwork tree. One of them is particularly interesting, and that is a fix for the BGP peer ‑‑ negotiation mechanism. This is a persistent problem that we have at INEX: whenever we restart Quagga, we just can't establish connections with a small number of peers. Do you think that this could be prioritized? I had a chat with the Cumulus people about it, and it turns out that that patch is quite dependent on a stack of other patches. So maybe it's not very trivial to implement.

PAUL JAKMA: You are saying the patch is already there?

NICK HILLIARD: The patch is already there, but it's dependent on a huge number of other patches which need to be reviewed and committed before it will work.

PAUL JAKMA: The goal is to get that stuff reviewed and merged and integrated as soon as possible. If you want to talk to me off‑line and tell me which patches they are ‑‑


AUDIENCE SPEAKER: Hi, Nat, today with my Cumulus hat on. They are all on OSS dot Cumulus; we have put them onto GitHub, so you can build all our patches there straight into a binary straight away. And the thing is, you can take the debs that we install on an x86 system. We would love to help try and get these patches back in quicker.

PAUL JAKMA: I have been talking to Danesh.

MARTIN WINTER: OK. Any more questions? No. No. OK. Thank you, Paul.


So we have one more lightning update, which was submitted extremely late, actually after the session started. We have a little time, though, so we will do the NSD 4.1 update.

JAAP AKKERHUIS: Originally William was supposed to do it, but his voice went ‑‑ next slide, please.

Well, NSD history: this is quickly where NSD came from. The use case was a very small number of static zones that hardly changed over the day, notably the root zone, so we could get away with trading memory for speed. Slowly the usage evolved, and although the first use case is still there, lots of people want to have lots of zones, with lots of dynamics and changes every five minutes. So NSD had to evolve like that, and the latest version caters to that.

So, we don't precompile all the ‑‑ all the answers. And that saves us a lot of memory, because you don't need to keep those answers in core; it saves half of the memory, and that is a big win if you have got gigabytes of zones. The speed penalty is 3% or something like that. Another thing we did: we used to be very strict, and we first stored the XFR and compiled on that before we served it. Now we apply it immediately when we get it, because that makes a lot of sense and everybody else is doing it that way as well.
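The trade‑off described here, precompiled answers give a cheap lookup but keep everything in core, while rendering at query time saves memory for a small per‑query cost, can be illustrated with a toy lookup; this is purely illustrative, not NSD internals:

```python
# Toy illustration of precompiled vs on-demand answers. The zone data and
# the "wire format" string are made up; real NSD builds DNS packets.

zone = {"www.example.org": "192.0.2.10", "mail.example.org": "192.0.2.20"}

# Old style: build every answer up front (fast lookup, memory-hungry)
precompiled = {name: ("%s IN A %s" % (name, addr)).encode()
               for name, addr in zone.items()}

def answer_precompiled(qname):
    return precompiled.get(qname)      # just a dictionary lookup

def answer_on_demand(qname):
    addr = zone.get(qname)             # render at query time instead
    if addr is None:
        return None
    return ("%s IN A %s" % (qname, addr)).encode()
```

Both return the same bytes; the difference is only whether the rendering work (and the memory it produces) is paid once at load time or a little on every query, which is the roughly 3% penalty mentioned.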

This is more or less the main features in 4.1, and for more details see the blog which came out with 4.1.

So, if there are any questions, I can take them, hopefully.


MARTIN WINTER: Thank you. So, at this session we also have a discussion about Working Group Chair re‑election. This discussion came up especially at the last RIPE meeting between the Working Group Chairs: we should have rules and, basically, somehow give other people a chance, if they think they can improve things, to come up as a Chair and get elected, and we tried to figure out how best to do that. So Ondrej came up with a few ideas. He can take over here and explain what he thinks.

ONDREJ FILIP: Thank you very much, Martin. I apologise, this is pretty boring stuff, I know there are a lot of techies in the room, but we need to do some administrative work. So what do we want to achieve? I feel a bit responsible for this because I was a member of the original gang of four who brought the recommendation about this to the Chairs' group, but basically, the Chairs' group decided that each Working Group should have rules for Chair election, selection, re‑election and reselection, and unfortunately we are the first Working Group at this RIPE meeting, so we have to start. I would have enjoyed seeing what others have done, but we don't have that luxury. So, we discussed this very intensively, me and Martin, and we have some proposals, but at this stage we have more questions than answers. We hope we will get some feedback from you. So this is a start of the discussion. We will collect some feedback here and we would like to continue on the mailing list. So, if there is anybody who is not on the mailing list, guys, please subscribe now, really; we need your feedback and we would like the mailing list to grow.

So, first of all, we are discussing the number of Chairs. Currently, we are two people, from different parts of the world, quite independent of each other. So I think that's OK. And we believe that two is the minimum, you know, in case something bad happens. So we have two Chairs. We don't think that it makes sense to have more than three; we really believe so. Anyway, in the Chairs' group it was commented that there are too many Chairs, so again, I don't think more than three makes sense. We can discuss whether three is better than two or not; that is an open issue. Currently, my feeling is that two are OK, but again, this is your Working Group, so your feedback is the most important.

Then, how long should each Chair serve? We feel that two years is too short, so in case we stay with two Chairs, we propose that the term will be four years. In case we expand to three Chairs, then I think it's fair to limit the term to three years. And we would like to have a rotation mechanism: every RIPE spring meeting, so just once a year, the Chair who has been serving the longest would resign, and we would have some election or selection mechanism. In case it is just two Chairs, this would happen every second spring meeting, so every fourth meeting.

MARTIN WINTER: I want to add here, just to make it clear, we don't think there should be a maximum term for the Chairs. The idea is that after that time one of the Chairs would resign and, if he wants to, stand for re‑election against other candidates.

ONDREJ FILIP: Again, it's up to you. What we are really struggling with the most is the election process. We don't think it's a really good idea to have any heavy formal process and, you know, talking to the other Chairs, they have the same issue as well. So, what we would like to propose is that we would have a call for candidates, and the candidates should notify the Working Group one month in advance of the spring meeting. If we have more than one candidate, then we would do some very simple process, something just during the meeting or so, and if we had, say, two similarly popular candidates, then we would either have some discussion among them, maybe they can decide by themselves, or we could just do a draw or something like that. So some very simple process; we don't have to be formal, and I think it would be a waste of time to have a very heavy formal process for this.

So, again, if you have any feedback now, it would be very valuable and it would really help us. We will continue the discussion on the mailing list, so please subscribe. And we hope we will finish the proposal and present it at the next RIPE meeting, possibly with a Chair election. So the floor is open; please let us know what you think about that.

MARTIN WINTER: I want to add something. Our current idea is that at the next RIPE meeting we would probably have the first election or re‑election: either electing a third Working Group Chair, or having one of us stand for re‑election against others, depending on what the Chairs think is right. So the question is open: what do you think? The whole idea is mainly how to make the best out of this Working Group, how to improve it in the long term. So we are trying to figure out what you think would be the best way to do it.

NIALL O'REILLY: I hope ‑‑ is this on? Yeah. A free agent at the moment. I just want to recommend to everybody's attention, without promoting it, the ideas that Nigel Titley put out for the process in the Database Working Group; it seems to me to be very simple, very lightweight, very effective, and despite being lightweight it's actually quite formal.

MARTIN WINTER: Do you remember the exact process? Can you give a quick introduction?

AUDIENCE SPEAKER: Why don't you say what it is if it's so lightweight?

NIALL O'REILLY: It's very lightweight. As I recall, but the Database Working Group mailing list is the authoritative source of the document, it's this: when it's time for one of the Chairs to stand down, as you have already suggested, there would be an invitation to the group for nominations; the one standing down can be among those nominated. Then, if there are too many candidates, there is a kind of negotiation phase to see if, among themselves, they can agree who should withdraw as a candidate. If there are still too many, the names are put in a hat and, rather than have something that is not transparent like humming or an election or whatever, they are just drawn from the hat. So it's a random experiment and it's fair by definition.

MARTIN WINTER: The way I understand it, it doesn't sound like there is any input on which would be the best choice besides the candidates themselves. So I am not sure if ‑‑

NIALL O'REILLY: I think that the candidates are expected to pay some attention to input from the Working Group before deciding whether ‑‑

ONDREJ FILIP: The most important ‑‑

NIALL O'REILLY: Look at Nigel's description, because it's better than what I have remembered.

ONDREJ FILIP: The most important thing is the negotiation phase, actually, that is the key thing.

MARTIN WINTER: The negotiation part between the candidates, which I heard about from a lot of other Working Groups too, is kind of an interesting thing. The worry is, if they can't agree, whether a random draw is the right thing; at that point, personally, I think an election might be the better choice. But that is something open to discussion, so please speak up about what you feel, now or on the mailing list.

SHANE KERR: I don't know why, but there seems to be some tremendous phobia against a simple vote in the Working Group Chair selection process across all the Working Groups. I would like to put out my vote, even though we are not voting, and just say a simple vote is not something weird; it's very straightforward. Just ask the people in the room. That would be my recommendation.

ONDREJ FILIP: So like having voting sheets or humming?

SHANE KERR: Just a show of hands. And if it's really close, then I think maybe you need to figure something out. But trying to avoid voting by coming up with random selection processes and things like that, I mean, I don't know.

ONDREJ FILIP: We don't have that phobia, actually; we are discussing it and we are quite open.

MARTIN WINTER: The goal is to get the best for the Working Group; that is part of why I worry about a random draw, because I suppose the Working Group itself knows best what they would like.

ONDREJ FILIP: Any other comments? We have five more minutes, so please, don't hesitate.

AUDIENCE SPEAKER: No special affiliation. One comment I made at another venue: RIPE Working Groups are based on consensus, so I would like to see Chairs selected on a consensus basis. If you need to vote at the very end, or draw out of a hat, why not, but at least start by getting the Chairs selected based on consensus.

Another point, if voting is required ‑‑ let's put it this way. The other advantage of doing it by consensus is that you will not exclude those members of the Working Group that are not in the room. This might be just those choosing to be in another Working Group at this time, but it could also be those that are not able to attend each and every RIPE meeting. And I would definitely hate to see those excluded from Chair selection.

MARTIN WINTER: Thank you for the comment. Part of it, when you say consensus, is the discussion between the candidates; I am not sure if that is what you mean by consensus, that they would do that first and then vote afterwards. Is that what you are referring to?

AUDIENCE SPEAKER: Let's say I am uncertain about that. I would say candidates step up and discuss on the mailing list whether there is consensus support for them: does the mailing list have consensus that you are a good candidate? If there are five candidates that the mailing list would like to have, then vote, whatever, I don't care. But at least every candidate then has the support of the people.

AUDIENCE SPEAKER: Leslie Carr. I would say, if we are concerned about people who can't physically make it to vote, why don't we have an IRC channel, have a physical show of hands, and also ask: can we have IRC hands?

MARTIN WINTER: Keep in mind that most of the other RIPE decisions are actually voted on by the people who attend. For most of the other RIPE decisions, if you are not in the room at the session, then you basically don't have a right to vote.

ONDREJ FILIP: How many remote participants do we have currently? Do we know that number?


ONDREJ FILIP: Compared to the number in the room it's probably ‑‑ it's not a big number.

BENEDIKT STOCKEBRAND: Probably going to be one of the next Working Group Chairs for the IPv6 Working Group. We had a discussion on this yesterday, and there is one thing I think you should keep in mind: if you can get consensus, by all means do it, and make sure that whatever the policy is, it doesn't block that. But that also means that once you actually need the policy, things are getting really, really difficult, and a vote or drawing from a hat or whatever is just a last resort, basically, and it should be. I mean, this is not about voting or whatever as a standard occurrence, but just for when things are about to get kind of out of control.

PETER KOCH: DENIC. I promised myself to keep silent on this topic, but now I regret it, and so will you. This is in response to what Martin said, where I would have expected an outcry here. When you say that all the other decisions in RIPE are taken by voting, you are hereby invited, as my witness, to agree that the policy development process is heavily broken, because you are so right and that is so wrong. Your observation is probably a sign of a degenerate process in general, but that doesn't mean voting is the right thing to do. For policy development and all the other decisions, maybe we are having too little discussion and too little consensus building, and just a show of hands is this Facebook democracy bullshit of likes and plus ones. To that extent, I like Jules' argument: if it's about consensus building, maybe we should get behind the people. If there is a tie to break, then drawing lots out of a hat is truly random, and that is probably better than a voting system that can be gamed, and of course invites people to game it, by IRC hands or I don't know what. What we do here, or could do here, is not voting, because voting is real democracy, which means we have a defined electorate. We don't have that here. This is a random assembly of people in a room at some point in time; it's probably also random, but the randomness is worse than what you can achieve by drawing lots out of a hat. And that is it.

MARTIN WINTER: OK, point taken.

AUDIENCE SPEAKER: RIPE NCC. I have a question from the chat room by Carsten introducing himself as a concerned Internet citizen. He is asking what about the profile of the candidates in this process?

ONDREJ FILIP: Well, as we said, we will have a one‑month notice before the meeting, and that also includes the profiles of the candidates; I believe these guys would send some bio to the list, that is pretty standard.

MARTIN WINTER: We assume the selection, in whatever form, happens during the meeting, but there is a deadline for candidates who want to be a Working Group Chair: they have to speak up at least one month before, basically introduce themselves, and tell you on the mailing list why they want to be Chair and what they would like to do. Basically, it's not a last minute thing; we want people to announce their candidacy at least one month before, as a fixed deadline. Does anyone have any thoughts on two versus three Chairs, any feelings one way or the other? I am curious.

BENEDIKT STOCKEBRAND: We also had this discussion yesterday during lunch. My personal opinion is, if you have two people and they have different opinions, it's kind of difficult to come up with a reasonable decision. If there are three people, and I am not talking about voting here, one person realising that the other two actually have a different opinion tends to make life easier. So, I suggest if you can get three people, go for it; it will make life a bit easier.

ONDREJ FILIP: It's true, we are roughly the same size and when we are fighting it's quite bloody.

MARTIN WINTER: Normally, I win. Anyway, we can have that discussion too. If we end up with the decision for three, then the idea is that at the next RIPE meeting we would have the third person selected, and the re‑election of one of us would start a year later.


MARTIN WINTER: Basically, back to the general point: at the next RIPE meeting in spring, we plan to have one of us standing for re‑election, or to have a third person elected. So if you feel like you want to be a Working Group Chair, you should start thinking about it and announce it at least a month before the next RIPE meeting in spring.

ONDREJ FILIP: I think that is the end of the session; we are finishing just on time. So thank you very much for attending, thank you very much for joining the discussion about the Chair selection process, and see you next time.