Plenary session

4 November, 2014

At 2 p.m.:

CHAIR: We are starting in five minutes. Can I ask Ondrej to come up and prepare. Thank you.

Welcome back from lunch, everyone. We are starting our session today, and we are actually going to start out with lightning talks. These are shorter sessions, ten minutes each, and it's up to the speaker to allow for as many questions as possible, so they can do five minutes of presentation and five minutes of questions. Now, if they hit the ten minutes and they are still presenting, you can figure that they maybe don't really want questions, but I would recommend not letting them off so easily and catching them in the hallway. All right. So our first presenter is Ondrej, and he will be talking about challenges in end point DNSSEC.

ONDREJ CALETKA: Thank you. I work for CESNET, which is the Czech NREN, and I would like to talk about DNSSEC validation, doing it the proper way. First something optimistic, let's say: the percentage of clients doing DNSSEC validation is, somehow, growing, according to stats by Geoff Huston, which looks good. There are also some applications which can make use of DNSSEC, like SSH fingerprints or DKIM or TLSA, the DANE certificate binding, but nobody does the validation correctly. Most current deployments are based on trusting the AD flag from a nearby DNS server, and trusting that over an untrusted network is wrong.

And there are some ways to fix it, like using a validating DNS library such as getdns, but, you know, when, for instance, your validating resolver is Google Public DNS, the insecure last mile could be half the globe or something like it. So it's not very good from a security point of view.

What can be done about it? One project approaching this, by CZ.NIC, the Czech domain registry, is a security research project called Turris; maybe you heard about it at the last RIPE meeting. These are special routers with an OpenWrt-based operating system that are handed out to Czech home users, and they are now deployed in about 1,000 households across Czechia. One of their security features is that they use Unbound as a recursive resolver, so the insecure first mile is shortened to the user's home only, which is still not ideal, but much better than a recursive resolver halfway across the globe.

The problem is that there are two ways to set up Unbound in the router: you can set it up to forward queries to the recursive resolver of your ISP, or you can do full recursion. The trouble is that the forwarding mode almost never works well, and there are even a few cases, about one percent of the users of this router, who have problems even with full recursion mode. So I dug a little deeper into what the problem with forwarding is. It's actually a known bug in the most popular name server, BIND, in all versions lower than 9.9, so even, for instance, the BIND that is the recursive resolver on the network at this meeting suffers from the same bug. You can try it: if you install Unbound on your computer and forward all queries to the DNS server on this network, you will see that you cannot validate DNSSEC-signed names that are synthesized from wildcards. It's tricky to find such names, but since the Czech top level domain has the biggest percentage of DNSSEC-signed zones, there are quite a few services in Czechia affected by this problem. The users usually complain on the router's forum that the router is somehow broken, because they replaced their old router, with which everything worked well; so they blame the messenger and say the router is broken, but the router is only exposing problems that already exist.

This problem has fortunately been fixed in BIND 9.9 and higher, but it will take years until all ISPs upgrade their name servers to a current version. Even current stable distributions, for instance Debian, still ship the problematic version with this bug.

So what about the full recursion mode? It doesn't scale well; I would not imagine deploying full recursion on all routers all over the world. The other problem, found especially with some small ISPs, is that they do traffic engineering like DNATing everything on UDP/53 to their own DNS server or to a third party like Google Public DNS, and they have been doing it for years, nobody is complaining, so where is the problem? And even if they are not trying to do any traffic engineering, they may just have bought a security appliance that is, for instance, killing EDNS0 packets by dropping anything bigger than 512 bytes, even now.

So this is the current problem with DNSSEC validation at the end point, but it's going to get worse, let's say, for networks like the experimental network at this meeting, with NAT64 and DNS64, because if you have DNS64, you cannot validate the synthetic AAAA records from DNS64. So you have two options: A, trust the AD flag from DNS64, which you probably shouldn't do if the network is untrusted; or B, do DNS64 on your local host after DNSSEC validation, but in this case your computer has to learn, somehow, whether the network it is connected to has NAT64 enabled, and if so, which NAT64 prefix is being used. Another problem with this set-up is that even full recursion mode is still problematic on IPv6-only networks, because even major IPv6-enabled sites like Google still have name servers reachable only over IPv4.

So, there are already some RFCs proposing solutions, a list of proposed solutions for how an end point can learn which DNS64 prefix is being used. The simplest solution is to query a well-known name which has only A records; if your resolver returns AAAA records, it means NAT64 is in use, and you can use a heuristic to find which prefix is being used. There are some attack vectors that could be misused if this is not done properly, which is why I would like to draw attention to this standard. So, my conclusions: if you are going to implement NAT64 in your network, please read this RFC and set things up correctly. I would also recommend deploying DNSSEC validation, as you have heard many times already, because if Google can do it, you can do it as well. Please don't block, even unintentionally, or redirect UDP/53 packets of any size. And because of the possible attack vectors, from my point of view the simplest solution is to use the well-known prefix for NAT64 whenever possible, because it is harder to misuse; in case the well-known prefix is not for you, please read the RFC I have mentioned and set up DNS in a way that the client can validate your network-specific prefix using DNSSEC. So, this is my talk about challenges in end point DNSSEC. Are there any questions or comments?
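The discovery heuristic Ondrej refers to is specified in RFC 7050: the host queries the well-known name ipv4only.arpa, whose only records are the fixed A records 192.0.0.170 and 192.0.0.171, so any AAAA answer must have been synthesized by DNS64. A minimal sketch of the prefix-extraction step, assuming the common RFC 6052 embedding with the IPv4 address in the last 32 bits of a /96:

```python
import ipaddress

# Well-known IPv4 addresses of ipv4only.arpa (RFC 7050).
WELL_KNOWN_V4 = [ipaddress.IPv4Address("192.0.0.170"),
                 ipaddress.IPv4Address("192.0.0.171")]

def discover_nat64_prefix(aaaa_answer):
    """Given a AAAA answer for ipv4only.arpa, locate the embedded
    well-known IPv4 address and return the NAT64 /96 prefix, or
    None if no embedding is found (i.e. no DNS64 in use)."""
    packed = ipaddress.IPv6Address(aaaa_answer).packed
    for v4 in WELL_KNOWN_V4:
        # Only the /96 suffix embedding is checked here; RFC 6052
        # also allows other prefix lengths.
        if packed[12:16] == v4.packed:
            return ipaddress.IPv6Network((packed[:12] + b"\x00" * 4, 96))
    return None
```

On a network using the well-known prefix, an answer of 64:ff9b::192.0.0.170 yields 64:ff9b::/96; a real implementation would also handle the other RFC 6052 prefix lengths and the attack vectors the talk warns about.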

CHAIR: Thank you, Ondrej.

Our next presenter is Jeff Osborn. He has an interesting talk: How the hell should we fund OpenSource. I like the title. You can run if you want, that's fine.

JEFF OSBORN: Hi, thank you for your time. I will try to make this fairly quick. I was offered the opportunity to spend a few minutes at both ICANN and NANOG recently, and the question's simple and it fits on one slide. This is not so much a presentation as a request for comment, in the actual literal meaning of the phrase: how the hell are we going to fund OpenSource?

Over the last few decades, OpenSource has been a valuable part of networking and part of all of our professional lives, I'm sure, in this room, but it's not obvious how we are going to pay for it; it's not obvious how it continues to be paid for. Things like the Heartbleed vulnerability made it obvious that there is a flaw: six guys spread across various parts of Europe, with total donations of 2,000 dollars a year, ended up causing a problem that was felt around the world, and it showed there is something very wrong with a funding model like that. The recent Bash, you know, Bourne shell vulnerability similarly shows the funding model is a bit askew; how this works is still an open question. I am not going to posit a problem and say I have a solution, but I'd really like to begin a discussion of how the hell this stuff gets funded. Commercial organisations submit bills, OpenSource organisations don't, and it's not obvious what we do about it.

So there are a couple of ways OpenSource can be funded, and I am not sure any of the models has really been proven yet. I am old enough to remember Red Hat coming along with a model of charging for support services. Everybody says, what is the problem, just charge for support services. Well, the last statistic I have got shows that BIND 9 has 255,000 instances in place right now, and a very knowledgeable new friend at the lunch table kind of laughed at me and said it's north of a million. Whichever you argue, I can tell you with certainty that 120 people pay for it. So we are four orders of magnitude off. So, if the support model is the right model, somebody else is doing better than us, and it's entirely possible I am an idiot, and my predecessors, and a number of other OpenSource companies, but I am going to hold off on that being right, although it's entirely possible, and go with other options.

The other model that gets proposed a lot is the premium model: you pay us money and we will give you working code, or, for the poor riff-raff, shitty code, and I find that morally reprehensible. That is really a non-starter. Beyond that you run into the whole range of bake sales and threats of kidnapping that I find counsel won't agree to, so I am not going to do that. Like I said, this was an interesting chance to get five minutes to ask whether we can start a discussion about how we are going to fund OpenSource, and leave two open questions. One is: do you ever feel like you are doing the wrong thing if you are working for a commercial organisation that makes use of OpenSource and you don't spend any money on it? Because if you don't, then a whole range of opportunities I was going to try aren't going to work. I am really happy, though, today, differently from the last two times I got to talk about this, to announce that we are moving forward with something. I have been talking with Benno and the folks at NLnet Labs, makers of Unbound and NSD. We have an infrastructure at ISC, where we are basically a break-even organisation: we have marketing and sales and a support organisation, advance security notification and those sorts of things, and they are trying to grow into that. So we shook hands today, and my sales and marketing organisation and his organisation are going to work together, with no money changing hands other than: if you have an OpenSource support issue, we hope we can collect the money, take a share of it, and hand it off, and internally pay our own overheads. Initially we will be doing that for advance security notification; it's our base level of support at ISC and it will be the base level of support for their products, and hopefully in the first quarter we will also be able to jointly sell, and slide cheques back and forth for, the support model. So maybe that will work.
Like I said, this has been an opening for comment, and I would love a comment now on whether you think it will help if OpenSource providers work together to be stronger than they are individually.

And that is pretty much what I have got to say, if there aren't any questions?

CHAIR: All right. Shane, go ahead.

SHANE KERR: This is Shane Kerr. So ‑‑

SPEAKER: Affiliation.

SHANE KERR: I am speaking on my own behalf; I work for Dyn, and I used to work for this guy. Congratulations on NLnet Labs, I think that is good for both organisations. There are many aspects to this problem, but one minor aspect is that it's frankly a licensing problem, with ISC and with NLnet Labs, which is that the BSD licence actually encourages companies to take your code and not only not give it back to you, but not give it to anyone. It was a decision made explicitly by the various organisations that adopted that licence, for various reasons, which may or may not have been valid, but in today's environment I think it's the wrong decision. I think companies that are interested in the long-term well-being of OpenSource need to have a little bit of teeth in their licensing, something like the GPL, which requires the code's users to carry it forward. It's not enough and won't solve the problem, but without it I don't think you can get much further than you are right now. And if it's at all possible, you could look into relicensing your code, which would be great.

JEFF OSBORN: Duly noted, and that is good input, Shane.

VESNA: Internet citizen. My question is slightly related to what Shane said, but to put it in the form of a question: why did you choose to call it, at least in your presentation, OpenSource? Why didn't you say, how are we going to fund free software? What was the reason?

JEFF OSBORN: I fear something has been lost in the translation. OpenSource, in my experience, is free software, free software is OpenSource.

VESNA: It doesn't say so on your slide; it says OpenSource. So from the PR perspective, some people would prefer to call it free software; I understand others would prefer to call it OpenSource, but from my personal point of view, I would prefer to see free software there.

JEFF OSBORN: Duly noted. If I am lucky enough to give this lightning talk in Honolulu (it's been last-minute every other time), I will take that up.

JIM REID: Just another guy that has wandered in off the street. I am not sure it's a question about licensing per se; I think there may be a need for OpenSource organisations to be more proactive about generating cash, and I am thinking of the sort of people using OpenSource in their enterprises that perhaps we're not good at contacting: not so much the people in this room, but large corporations, banks, retail chains, people like that, who have BIND and DHCP software inside their enterprise networks. Maybe the approach there is to go to those guys and say, look, OK, it's a free software project, OpenSource project, whatever, but surely from the point of view of your continuity of supply and the stability of your organisation, you really should be paying a licence fee of sorts to us; how about kicking in 5,000 bucks or 10,000 or something like that? Maybe an approach like that, which is a bit more business-focused for business people, without diluting the notion of free software or OpenSource software. How about that?

JEFF OSBORN: That is an excellent point. In the future I think I am going to open this a little differently. ISC is break-even and is doing pretty well, having come out of not having done very well; a year and a half ago we were sort of bleeding money and had way too many projects. We weren't getting money from people we should have been. We have turned it around and we are break-even, and that is what kind of allowed me to stop and say, you know, look at the other poor bastards who can't raise any money; what is wrong with the model? We are doing it by just working really hard and calling lots of people and asking for money, and it's been successful, but it's a challenge, and I think sales and marketing are not in the talent set of most free software/OpenSource organisations, so it's tough.

JIM REID: I quite agree, and I think another possibility just flashed into my head a few moments ago: maybe there are lessons that the OpenSource groups can learn from other charitable enterprises. I am thinking, in a US context, of arts organisations that rely on sponsorships, donations, and fundraisers who go out to raise the cash or part-fund the salary of a player in an orchestra or something like that. Maybe that is an avenue worth exploring; not that I am trying to criticise the way the fund-raising is done now, but maybe that is another potential vehicle to bring in the cash.

JEFF OSBORN: That is excellent input, Jim, thanks.

CHAIR: One final comment from Benno and then next. Before I do that, it would be good if you can also highlight how you can continue this discussion beyond this presentation as well.

JEFF OSBORN: We really hadn't gotten that far. At the lightning talks I say this is a crazy idea we came up with; we don't have the follow-up figured out yet.

BENNO OVEREINDER: NLNetLabs, clearly a stakeholder here. Thursday evening there is the BoF chaired by Meredith and we can continue this discussion over there also.

CHAIR: Thursday evening at 6 p.m. Thank you.

JEFF OSBORN: Thank you.

CHAIR: Our next presenter is Job, right on time. Protecting the routing table. Golden prefixes.

JOB SNIJDERS: So at NTT we are constantly seeking or evaluating new ways to secure the routing table, and there is this idea for which a few of us are soliciting community feedback, because it might be a terrible idea or it might be controversial, and I need your input.

Let's go very quickly over this. Actual frustrations are things like the YouTube incident, where the beautiful country of Pakistan went off-line due to a misconfiguration. People manage to import full routing tables and re-announce them, which leads to disasters. And there are BGP optimisers on the market that will insert more specifics for traffic engineering purposes, but then there is software with bugs where NO_EXPORT is not honoured, and you are kind of hijacking prefixes. IRR: we use that to generate filters these days, and it's a very simplistic system. You upload a snippet of text, and then a data consumer can use those snippets of text to generate prefix lists. It kind of sucks, because every idiot can create anything in this system. You don't know whether the owner of the space actually authorised the creation of route objects; he might not even be notified that they have been created. And it's like a garbage bin: everybody puts a lot of data in it, so it works today, but two years later, when it's not needed any more, there is no incentive to clean stuff up.
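For reference, such an IRR "snippet of text" is an RPSL route object; a minimal, hypothetical example using documentation address space might look like this:

```
route:   192.0.2.0/24
descr:   Example route object
origin:  AS64496
mnt-by:  EXAMPLE-MNT
source:  RIPE
```

Anyone who controls the maintainer object can publish this; nothing in the system itself proves that the holder of 192.0.2.0/24 authorised it, which is exactly the weakness being described.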

RPKI is a concept with a cryptographic flavour, a top-down model from RIR to IP block owner to authorised route announcements. And there are some issues with RPKI that prevent me, today, from doing anything real with it. There are some legal issues in obtaining the necessary trust anchors. The tooling is still young; it has not been around for years. What you can do with RPKI on the router itself is limited on some routers: I would like to type in a lot of stuff, but I cannot express it in RPL or JUNOS today. It also adds a new protocol to your network, and I am always very wary of adding new protocols. With ROAs we still have a risk of stale data, because people might not clean up stuff when it's needed.

So this is the idea I want to run by you, and maybe we can have some discussion about it on-line or off-line.

What we could do is have a repository somewhere, or multiple, it doesn't matter, and have a form of SSL pinning but for BGP prefixes. The format can be extremely simple: you have a file with a list of prefixes that belong to a certain origin AS, we can wrap it in PGP to have validation that the person who put in those prefixes really belongs to that AS, and we can build tooling around it to upload the data to routers.

This would be an example for YouTube, for instance: we know what they originate, so we can generate a prefix set and have a policy that applies to each and every BGP session, where we only allow the YouTube prefixes from YouTube's origin and drop them from everybody else. The nice thing about this methodology is that it relies on proven technology, like routing policy and prefix lists, that has been around for ten years. I can deploy this on every platform and not rely on RTR. Legally it might be more interesting, because a repository like this could be distributed under a different licence, which makes it easier for some companies to consume the data and recycle it in their router configs. And each autonomous system should make its own decisions about what it wants to accomplish: you might only be interested in YouTube, Facebook, Google and other big data producers, and not care about the rest. That is up to you.
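As a sketch of the consumption side, here is a hypothetical generator that turns one repository entry (an origin AS plus its pinned prefixes) into policy text. The output format is invented purely for illustration, not any real vendor syntax, and the AS number and prefix in the usage note are made up:

```python
def golden_policy(asn, prefixes):
    """Render a 'golden prefix' policy for one origin AS:
    accept the pinned prefixes only when originated by `asn`,
    and drop them when announced by any other origin."""
    lines = ["prefix-set golden-AS%d" % asn]
    lines += ["  %s" % p for p in sorted(prefixes)]
    lines.append("policy: accept golden-AS%d if origin == AS%d; "
                 "otherwise drop golden-AS%d" % (asn, asn, asn))
    return "\n".join(lines)
```

For example, `golden_policy(64500, ["192.0.2.0/24"])` produces a prefix set plus the accept/drop rule; a real tool would emit per-platform syntax from templates, as the talk suggests later.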

We could prevent stale data by removing it from the repository: if a prefix has not been seen in BGP for three months and you still list it as being yours, we could just kick it out and keep the repository fresh that way.

A possible participation process could be a web of trust, or one that relies on a trusted third party overseeing the process, so we could say you only get into the repository if two people introduce you and vouch that you work for this and this company. And an auditor function could exist that prevents overlap and checks whether prefixes are in the BGP tables or not. I don't know; this is still up in the air.

Then, on the consumption side of the repository: you clone the repository, you run the validator tools to check that the repository's integrity is not tainted in any way, and it could generate configuration to your liking; we could build some tools that provide you with simple template-based input, so you could have it structured the way you want in your network. Again, you make the local policy decisions here about which ASNs you want to use golden prefix information from or not. And then a tool could push it to your routers, run from cron or Jenkins. This idea has been circulated as a lightning talk at NLNOG and now at RIPE, and I have a personal interest in doing something like this at the large networks on this planet, to prevent shitty stuff from happening. It's something I could possibly do today, because I am not introducing new technology into the network; I am merely adding an extra source of data to generate prefix lists from.

The NLNOG foundation could take a leading role in governing this process and be the go-to place if you want to have your prefixes in the repository. That is also still up in the air.

I think I have two minutes left. I am claiming two minutes. Please give me some feedback on this idea. For instance, would you want your prefixes to be part of this repository? Would you use this in your network if it were available and you had a good feeling that the data was usable?

DAVID FREEDMAN: Less of a question and more of a comment. I seem to remember, up until recently, having a prefix list on my routers called golden networks. I think it came from RIPE-178, from about 16 years ago, and it was quietly deprecated by the Working Group at some point in the late 2000s. I recall João had a website that published what these things should be, and they contained things like root name server prefixes. I guess my comment is: this smells like that, and I just wonder whether we wouldn't devolve into arguing about what should be on it and what shouldn't.

JOB SNIJDERS: The context in which the phrase "golden prefixes" was originally coined is a thing of the past, I agree. The big difference is that we would now provide a mechanism to contribute to that list, both on the data insertion side and on the consumption side, and I think 15 years ago that was lacking. So that would be the difference: it's easy to get data in and out.

CHAIR: We have time for one more comment.

WILFRIED WOEBER: From Vienna University, still Working Group chair for a few more hours. More importantly, our small group has had a closer look at the RPKI stuff, and I fully agree that there are very rough edges and quite a few loose ends with RPKI. With that background, I don't want to rate your proposal as either good or bad, advantageous or not. The thing that I want to put up for discussion is: why don't we try to repair the things that have been around for a while and have some flaws, like the IRR in the RIPE database, as opposed to starting over more or less from scratch and implementing something, whether it's called golden prefixes or IRR LG or whatever, I really don't care? My feeling is that we have so many different approaches and so many different half-baked solutions that I am not sure that just trying to start over again from scratch is the proper way to do it.

JOB SNIJDERS: There is no way I would want to start over from scratch. I view this as a complementary source to, for instance, the IRR: the route still has to be in the IRR, and you can absolutely get extra protection by also adding it to the golden prefixes.

WILFRIED WOEBER: If you want to do it that way, and I think it's a good idea, we would still have to fix the IRR in the database, because the worst thing that could happen is to have one subset of operators building their filters based on the IRR, which is different from other ISPs building their filters from the golden prefix infrastructure. I am not sure whether this is going to improve overall stability; just for discussion.

JOB SNIJDERS: It's a fair comment.

CHAIR: One quick comment from our on‑line ‑‑

SPEAKER: How do you deal with new announcements? Who is the auditor? You imply the only usable local policy is to drop invalids; that's not true, I know of a US research network that is using local preference instead. Your trust model is awfully trusting.

JOB SNIJDERS: I disagree with the local-preffing; I think it's useless. The auditor could be appointed by the NLNOG foundation; well, I have to figure that out. New announcements, that is a fair question, I don't know. In general, new announcements are less important than years-old announcements.

CHAIR: Thank you, Job.

Our next presenter is Jen Linkova and she'll be talking about stop thinking IPv4, IPv6 is here.

JEN LINKOVA: Hi. First of all, those of you who attended ENOG 7 in Moscow this year, you might safely go back to your e‑mails or get coffee because you have seen most of those slides already.

And now, before we start talking about stop thinking IPv4, I'd like to spend a few minutes explaining the second part of this title, because I am pretty sure there are people in this room who might disagree with the statement. Some people are still asking when IPv6 is going to take over, while I personally think it's time to turn IPv4 off. However, let's look at the picture you can see in almost every IPv6 presentation. I happen to work for a company where we can do some IPv6-related measurements, so I just looked at the data. One year ago, at RIPE 67, we had two percent global IPv6 adoption, which is probably not much. However, I actually care about the percentage of my traffic: a year ago, 3%, and now I am seeing almost 5, and 5% I think is actually quite a lot. On top of this, what the graph shows is a kind of average, so let's look at Europe. There are countries where we cannot see IPv6 deployed at all; for example, Russia. Actually, when I gave this talk in Russia at ENOG this May, adoption was half of this, so basically they doubled it in the last six months. Well done; just four, five, six more years and you can get to more than 100% of IPv6 there. I don't know who is doing this, but great. And we have Belgium, where almost a third of the users have v6 enabled now, and Germany and Switzerland with more than 10 percent, which I think is a significant amount of traffic. Now, I hope some people agree with me that IPv6 is almost here.

Now, what do I mean by stop thinking IPv4? First of all, I am not going to talk about whether to deploy IPv6 or not, as I can see plenty of you have already deployed it. I'd like to talk about how to deploy it if you are going to do this, and more specifically, I'd like to summarise some misconceptions, I'd say, which I have observed while discussing v6 deployment with other people.

I think there is a kind of problem where people try to apply all their previous knowledge of how to deploy IPv4 to the new technology, so they assume that what we have is almost the same tool we used to have, just with a bigger address space. However, it's a rather different protocol; it has things which we don't have in v4, and I am not sure we should just ignore them. I think it might be a good idea to look at them, and even if, for some reason, you don't want to use them, you can discard them after you have looked and decided not to, understanding what it is you have actually chosen not to use.

For example, there is the scope architecture, there is multicast, there is autoconfiguration, and it might sound scary, but on the other hand, it might actually be quite helpful to you and sometimes even make your life easier.

For example, it's RIPE, after all; we are talking about IP addresses all the time. In IPv4, all addresses are equal, and some more than others. In IPv6, as you might know, an address has a scope: it might be a link-local or a global scope address. Unfortunately, I have found that many developers are apparently not aware of this, and they assume that link-local addresses have global significance and should be reachable from the global Internet.
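A developer can check the scope before assuming reachability; a small sketch using Python's standard ipaddress module:

```python
import ipaddress

def scope(addr):
    """Classify an IPv6 address's scope, the check an application
    should make before assuming the address is reachable from
    anywhere on the Internet."""
    a = ipaddress.IPv6Address(addr)
    if a.is_link_local:      # fe80::/10: valid on one link only,
        return "link-local"  # never reachable across a router
    if a.is_loopback:
        return "loopback"
    return "global"
```

For instance, `scope("fe80::1")` is "link-local", so sending that address to a peer on another network is meaningless (it would also need a zone index to even be usable locally).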

So, for example, let's say I have an IPv4 network and I need an address plan: I need to put addresses on all those network devices I have (I am talking about networks mostly). I need an address for every router, I need a /31 for point-to-point links, I need some subnets for VLANs and so on, and I apparently need to manage and configure all that address space, right? I hope most of you are not doing it in spreadsheets and probably have some kind of automation for this. Now we have dual stack. You might double the amount of address space you need to manage, configure and maintain, or you might look at another approach, which is not actually best current practice but might be quite useful. Remember that we have link-local address space, which means you just connect your devices, the routers assign link-local addresses, and you basically already have your network; all you need in the v6 case is just a loopback address for management. So you don't need to double the address space; you just need to assign a loopback address, for example. This approach is actually quite attractive, and I can confirm it works, at least in some networks, but you need to keep in mind that in some cases, for example if you have more than one link between routers, your traceroutes might be confusing, or if one day, fingers crossed, we run RSVP on a v6-only network, this approach might not work. But I think people should be aware of this possible design for v6, which was basically not possible in v4 at all.

OK. Now, v6-only networks have a bright future, I hope. Again, instead of having, what, seven or ten different subnets, we now have just five addresses; I think that is quite nice and simpler to maintain. Again, you cannot do this in v4. You might basically consider this design as something completely different from a v4 deployment. So, the address plan.

When I was working as an engineer for an integrator, building all these networks for various companies, we always spent significant time discussing the address plan: oh, shall we have a /27 for this subnet, or a /26, or maybe even a /25, or shall we renumber later, and so on. That is actually a waste of time in the case of v6. You should always assign a /128 for loopbacks, a /127 for point-to-point links, and a /64 for everything else; just don't think about it. /64 all the time. I think it makes your life much simpler: you don't need to think at all.
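That rule of thumb makes address planning almost mechanical. A sketch using Python's ipaddress module, where the /48 and the counts are made up for illustration, and one /64 is set aside for point-to-point /127s and another for /128 loopbacks:

```python
import ipaddress
from itertools import islice

def address_plan(block, n_lans, n_links, n_routers):
    """Carve a simple v6 plan out of an allocation, following the
    talk's rule: /64 per LAN, /127 per point-to-point link,
    /128 per router loopback."""
    sixty_fours = ipaddress.IPv6Network(block).subnets(new_prefix=64)
    lans = [next(sixty_fours) for _ in range(n_lans)]
    # Dedicate the next /64 to point-to-point links, and the one
    # after that to loopbacks (islice avoids materialising 2**63
    # subnets from the lazy generator).
    links = list(islice(next(sixty_fours).subnets(new_prefix=127), n_links))
    loops = list(islice(next(sixty_fours).subnets(new_prefix=128), n_routers))
    return lans, links, loops
```

For example, `address_plan("2001:db8::/48", 3, 2, 2)` yields three LAN /64s starting at 2001:db8::/64, two /127 links and two /128 loopbacks; no per-subnet sizing discussion required.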

Yes, another hot topic. I know IPv4 people love private address space. So a network engineer finally decides, let's deploy IPv6, looks at the configuration, and wants to configure something similar for IPv6 everywhere an IPv4 address is configured. And then, suddenly, on the firewalls, we cannot have NAT any more, and we don't have private address space, so people keep complaining, because they want that address space. Some of them even try to read the recommendations and even the RFCs, and they find ULA address space, which looks similar, almost like private address space. However, there have been a lot of discussions about using ULA as private space, or using it behind NAT, or using ULA in networks which are connected to the Internet, and my personal opinion so far is that it's quite a dangerous approach. There is a document summarising ULA usage scenarios, and using ULA as a private address space is quite dangerous, because at some point you might end up with something like address translation, which is horrible, or you might get into trouble with your devices trying to reach the Internet from ULAs, and so on. Try not to look for private address space only because you used to have it in IPv4; in IPv6 we basically just don't have it, and what we have in ULAs is not actually private address space.
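For completeness, ULAs are defined in RFC 4193: a /48 formed from the fd00::/8 prefix plus a 40-bit global ID that is supposed to be pseudo-random, precisely so that two merging networks are unlikely to collide. The RFC specifies a SHA-1-based generation procedure; plain random bytes below are a simplification:

```python
import os
import ipaddress

def make_ula_prefix():
    """Generate an RFC 4193 ULA /48: the 'fd' byte (locally
    assigned ULA) followed by a random 40-bit global ID.
    (The RFC's actual algorithm hashes a timestamp and an
    EUI-64; os.urandom is a stand-in here.)"""
    global_id = os.urandom(5)                       # 40 bits
    prefix_bytes = b"\xfd" + global_id + b"\x00" * 10
    return ipaddress.IPv6Network((prefix_bytes, 48))
```

Note that the result is still not "private IPv4 with more bits": it is not routable on the Internet, and mixing it with global connectivity invites exactly the source-address-selection and NAT trouble the talk warns about.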

Host configuration is a very, very hot topic here. I know there are a lot of people in this room who will not agree with me. We used to configure our hosts using DHCP for v4, so it's quite natural for an engineer to go looking for configuration for DHCPv6. However, I don't like carrying things in my network which I don't actually need. Yesterday, at the BCOP session, we discussed at the very end complexity: do we need to introduce additional complexity? And from my point of view, having an additional protocol which you have to configure, support and troubleshoot, and which might fail, is a kind of hidden complexity, because quite often all you need to do is rely on stateless address autoconfiguration. Your device can get an IPv6 address, and you can even provide DNS information; finally most vendors support it, as far as I know. Stateless autoconfiguration is here anyway to get your device configured; you could just not, must not I'd say, turn it off. And another reason for using SLAAC instead of DHCP is that you cannot rely on your end hosts supporting DHCPv6. For example, I have this lovely smartphone which does not support DHCPv6 unless you run some very, very special software, which I don't even know where to get. DHCPv6 support is not mandatory for a v6 host. So, for example, in a guest network, not all of your potential hosts would support DHCPv6. So unless you really have a reason to use DHCPv6, unless you need something which stateless autoconfiguration couldn't provide, it probably makes sense to think: do I really need DHCP or not?
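A minimal sketch of what SLAAC does for an address: the host combines the /64 prefix from a router advertisement with an interface identifier, classically the modified EUI-64 derived from the MAC address (modern hosts often use privacy or stable-opaque identifiers instead; the prefix and MAC below are made up for illustration):

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive the classic modified-EUI-64 SLAAC address a host would
    autoconfigure from an advertised /64 prefix and its MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.ip_network(prefix)
    return net[iid]  # network address plus the 64-bit interface identifier

print(slaac_address("2001:db8:1::/64", "00:11:22:33:44:55"))
# 2001:db8:1:0:211:22ff:fe33:4455
```

No server, no lease state: the router only has to advertise the prefix, which is why the talk treats DHCPv6 as optional complexity.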

The same actually goes for first-hop redundancy. We have routers configured with VRRP, HSRP, whatever you use for first-hop redundancy. Again, it's quite natural for an engineer to try to just replace v4 addresses with v6. However, in many cases you don't need it; again, you can just use router advertisements, so your router can tell you: no, I cannot be your default gateway now, I am not configured as a router. Actually, it can give you even more ways to control your network. For example, let's say I have two routers connected to my network segment and for some reason I have a problem with one router; instead of completely turning it off because it's broken, I just reconfigure it so it's not acting as a router any more, and then I forget about it. It happens. And then I decide to upgrade the software on the other router, assuming I still have a second one in the network. If I am using VRRP and only one router is left, the hosts will still be using it as their only gateway, and when I reboot it after the software upgrade, I find out it was the last gateway in this network. In the case of using router advertisements ‑‑ I am checking my time ‑‑ I can tell the router: please do not act as a gateway for those hosts, and if I have misconfigured something I notice it immediately, before I reboot or shut down a router or do something much worse than just reconfiguring it.

And actually, yes, another interesting thing; I am actually not sure whether all implementations follow this rule, but in IPv4, everyone who has gone through all these certification exams for different vendors knows how much time is spent on questions like: tell me, are these two hosts with these addresses in the same subnet, can they talk to each other directly, or should all traffic go through the default gateway? In IPv6 it should be at least slightly different: your host normally assumes that everything is behind the default gateway, except link-local, and only when it is explicitly told that a particular address actually belongs to the same link might the host talk directly. It means that even if two hosts belong to completely different /64s, they could still talk directly without a gateway in the middle. This might have some interesting security implications: you might think these hosts are in the same VLAN but have addresses from different prefixes, so they cannot talk to each other, which is actually not true.
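The decision described above can be sketched as a tiny next-hop function. The on-link prefix list and addresses are hypothetical; the logic is the IPv6 default of "off-link unless an RA said otherwise":

```python
import ipaddress

# A hypothetical host's on-link prefix list, learned from router
# advertisements that carried the on-link (L) flag for this prefix.
ONLINK = [ipaddress.ip_network("2001:db8:1::/64")]

def next_hop(dst: str) -> str:
    """IPv6 hosts assume a destination is off-link (send via the default
    gateway) unless the prefix was explicitly advertised as on-link."""
    addr = ipaddress.ip_address(dst)
    if addr.is_link_local:
        return "direct"                 # link-local is always on-link
    if any(addr in net for net in ONLINK):
        return "direct"                 # covered by an on-link prefix
    return "default gateway"            # everything else, even same-VLAN

print(next_hop("2001:db8:1::10"))   # direct
print(next_hop("2001:db8:2::10"))   # default gateway -- different /64
print(next_hop("fe80::1"))          # direct
```

Note the inversion from IPv4 habits: sharing a VLAN does not by itself make two addresses on-link, and conversely a router can declare a prefix on-link and enable direct communication.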

Yes, and now let's imagine for a second that we have finally deployed v6 in the network and we are talking about dual-stacking customer-facing services. In v4, yeah, we might use more than one A record for a particular service, and some people go to the DNS configuration and just add the same number of AAAA records, which might cause a problem for broken clients. Because let's say we have ten different AAAA records for a service: your browser will try the first AAAA record, then the second, unless it has some fast fallback mechanism implemented, and so on until it runs out of all the AAAA records, and only then might your browser try IPv4, which introduces a significant delay in how fast you can access that particular service. So the best current practice is: do not just copy; try to minimise the number of AAAA records you have for your web server, because it actually helps broken clients fall back to IPv4 much, much faster.
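The arithmetic behind this advice is simple. As a back-of-the-envelope sketch (the per-attempt timeout here is an assumption for illustration, not a number from the talk), a client without a fast fallback mechanism pays one full connect timeout per dead AAAA record before it ever tries IPv4:

```python
# Hypothetical per-attempt connect timeout for a naive client (seconds).
CONNECT_TIMEOUT = 3.0

def worst_case_fallback_delay(num_aaaa: int) -> float:
    """Worst case: every AAAA record is tried and times out sequentially
    before the client falls back to IPv4."""
    return num_aaaa * CONNECT_TIMEOUT

print(worst_case_fallback_delay(10))  # 30.0 seconds with ten AAAA records
print(worst_case_fallback_delay(1))   # 3.0 seconds with a single AAAA record
```

Trimming ten AAAA records down to one turns a half-minute stall into a few seconds for such clients, which is the whole argument for not blindly mirroring the A record count.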

SMTP. It's a big topic and I don't want to talk about it much, because there is an amazing article and you can see it on the slide. If you deal with mail, please go and read it. To give you just some idea: you cannot rely any more on whitelists based purely on addresses; going beyond that is always a good idea, and it's particularly important in the case of v6. Surprisingly, the situation with SMTP over v6 is not as good as it should be, exactly because of all this, but I think we will get there eventually. I think we need to educate people about how to do these things properly. So I strongly encourage people to go and read this article. And I have been talking about dual-stacking. The question is: do I really need two networks? Do I really need this legacy technology nobody cares about any more? Why, for example, do I need two networks to monitor, troubleshoot and support, especially taking into account that they quite often break separately? It means double the load on my network operations centre, engineers and so on. So you might actually consider still having just one network: not v4, but IPv6 only. And I know there are a number of success stories about this, and I hope that the next presenter will tell us more about it.

Oh, security. We have to talk about security, right, it's a very hot topic. About ten, eight, six years ago, I had a very similar discussion with people, but about a completely different technology: wireless. They were saying: no, no, we don't need wireless in our office. It's very insecure, we don't like it, we cannot control it, we are not going to implement it. The problem is that in many cases those people already had wireless in their office, deployed by the employees, who just came to their desks, plugged in some wireless stuff, some small box, and got wireless in the office, which their network department wasn't controlling at all; sometimes they weren't even aware of having this very insecure technology in the network. I think it might be kind of similar with IPv6, because by default most end-user devices have v6 enabled, so at the least they can communicate on link-local addresses, and if someone in the network starts sending rogue router advertisements, for example, you might actually get completely undesirable v6 communication in your network. So, as you know: if you cannot beat them, join them. I think it's much, much safer to get v6 deployed, configured and monitored than not to deploy it at all.

But again, how to deploy v6 from a security point of view? Again, some people tend to just go to the firewall configuration and ACLs and replicate everything, which seems like quite a nice idea. However, in some cases it might not work as expected. Because, as I mentioned, IPv6 gives us much more than v4 had, so we also need to think about the security of this new functionality.

For example, when we put in ACLs and firewalls, we need to remember link-local addresses. We have to remember that ICMPv6 plays a critical role in IPv6; I have seen just recently some people put in an IPv6 ACL which completely denies all ICMP, because those people, I assume, decided that we don't need ping. However, that completely breaks neighbour discovery on the link. So again, just try to think a little bit and not just copy the IPv4 configuration into the IPv6 part of your network. By the way, quite an interesting topic: when we configure firewall filters, for example, and we try to match some protocols like TCP, UDP, whatever, sometimes your device cannot look deep enough into the packet, because the protocol you are trying to match is so deep inside the packet that the firewall cannot see what is inside. This is the well-known problem of router advertisements with extension headers: for example, if you have RA Guard on a switch, which is supposed to block undesirable router advertisements, it might let a router advertisement with extension headers through, because the device cannot recognise that the packet is a router advertisement; it only sees a packet with some extension headers. So again, you might need to check how your filtering configuration handles extension headers and other IPv6 functionality. And actually, in the case you can see on the slide, it might not necessarily be an extension header; it might be due to encapsulation. I think I am perfectly on time, so I am just trying to encourage people to do the right thing. And before you can ask me a question, for which you have six and a half minutes, I have just one question for you. Because in most cases people come to me complaining that they don't know how to do things, they don't have guidance, they don't have documentation, I have put some references on this slide, and I might be slightly biased, because most of what I know about IPv6 I got from that book, but I still think it would be a good idea to read it.
If you want to understand how the protocol works, not just remember commands for how to configure the stuff: it has nothing to do with configuring routers; it's about how the protocol is actually designed. And now, as planned, five minutes for questions.

CHAIR: Thank you, Jen. Questions?

JEN LINKOVA: No questions?

AUDIENCE SPEAKER: This is (inaudible) from Romania. Did Google think about trying to build a cheap and good CPE Wi-Fi router for homes? Because this is our biggest problem in our network: we cannot go above a 25% IPv6 adoption rate, just because the CPEs are too dumb.

JEN LINKOVA: I am afraid I am not aware of any such plans currently. Which actually doesn't mean anything, because many things happen in the company, right? But no, I don't think we can help with that right now, unfortunately. But I am pretty sure there are plenty of CPEs now, I hope, which support IPv6. I can remember the discussions when CPE vendors pointed fingers at ISPs and content providers, and now my CPE at home works perfectly fine. They do exist; I understand you cannot replace them immediately.

AUDIENCE SPEAKER: Most of the ones that our customers use don't support it.

JEN LINKOVA: I do hope that they will just go out of sale, out of life at some point. We will get there eventually, right?

AUDIENCE SPEAKER: Martin ‑‑ sorry...

AUDIENCE SPEAKER: I wanted to make a comment. I have been testing CPE devices just randomly, I am from the US, and there are five or six that have no issue supporting IPv6. The only problem I find is that you have to configure them using IPv4, so they still require that, but with IPv6 I really have had no issues running them natively.

AUDIENCE SPEAKER: Martin Seebe. I love this presentation because it's a compilation of highly concentrated choices about aspects of deployment and architecture. Yet I have a feeling of déjà vu, of heated debates at the IETF about whether DHCPv6 is needed or not. I think much of that comes from what I believe are legitimate requirements from people who would like IPv4/IPv6 parity, while people who bring a new IPv6 mindset try to change minds. I think that saying what you say is also legitimate, but it reminds me of very heated debates which are, for me, one of the causes of the stalling deployment of IPv6. I mean that if IPv6 people come and say, OK, IPv6 is a revolution, so you have to change everything, then people say, oh, I won't go to IPv6 unless you give me some parity, I really need to be reassured; it becomes a sort of dialogue of the deaf. And from a teaching point of view, when advocating IPv6, I am not very comfortable saying, OK, this is your starting point, because people maybe will not believe me. So the credibility of these ideal choices has to be weighed against reality: ops people come from IPv4 and they need to be reassured that they will not lose everything.

JEN LINKOVA: What I am trying to say, and I am sorry if I haven't made it clear, is that you might still choose DHCPv6; I am just suggesting you think: do you really need it, do you choose it because you got used to it, or because you really need it? I think if people come from a v4 background, it's quite nice to stop and think a little bit. You might continue in the same direction, and that's perfectly fine. I intentionally did not put the company name on the slides.

BENEDIKT STOCKEBRAND: About the CPE issue: yes, there are a number of old CPEs out there that don't support IPv6, and the best thing to do about it is just not doing IPv6 on the ISP side, so people don't change those routers, until eventually the ISP doesn't have a choice any more, and then there is some real trouble. I mean, seriously, if you reason like this ‑‑ not you, Jen, but v4 people ‑‑ if you reason that CPEs don't support IPv6 so we can't do it as an ISP, all you do is ensure that the transition eventually hits you very, very quickly, and there is a good chance you won't survive that sort of transition at that point any more.

CHAIR: We have time for one very short question.

AUDIENCE SPEAKER: So my name is Bill Darte and I am on the Advisory Council at ARIN but I am only speaking for myself. I don't think that IPv6 is a technical problem; I think it's a business management issue, and I think the question that should be asked of senior management and marketing people in businesses everywhere is whether they want to be part ‑‑ do they want to be looked at as innovative and forward looking or do they want to be part of the whole Internet or just the old Internet.

JEN LINKOVA: I agree. I am just trying to speak about the technical part of this, because there are some technical problems and some technical aspects to it, yeah. I prefer to let business people talk to business people; I am trying to stick to network engineering as much as I can.

AUDIENCE SPEAKER: I understand, I just think that the technical problems will be solved when there is a business will to do that.

JEN LINKOVA: We might have another presentation on this but it's not going to be me.

CHAIR: Thank you very much, Jen.

The next presenter is Tore Anderson, IPv4 service continuity for IPv6 data centres.

TORE ANDERSON: Hello. In my daily work I run the network and data centre infrastructure for a company that does managed services, so we have a multi-tenant data centre where we actually operate all the tenants.

So, I am looking at how I can build an infrastructure that is durable, that is going to last me a long time, and that does not increase complexity.

And I'd like to say that I agree very much with Jen's presentation. We should stop thinking that IPv4 is a requirement, that it's mandatory, because we have IPv6; and if you buy a server, a load balancer, a firewall, or install some sort of operating system or application server, chances are it supports IPv6 just fine, and there is no requirement to use IPv4 as the foundation of the infrastructure.

So, the thing is, IPv4 is very hard to get hold of these days, and it's going to get harder; and even if you have plenty of IPv4 addresses, the other end that you are communicating with doesn't necessarily have that, so you will still have to deal with more and more hacks and workarounds to keep it running.

So, my message is to actually stop thinking about IPv4 and try to build an infrastructure on IPv6 only. That will also save you from complex stopgap solutions like dual stack, running two stacks at once; you can just skip that step entirely and go straight to the end game.

So, the thing is, there is still something like 95% of the Internet that does not have IPv6 connectivity, so even though you want to stop thinking about IPv4, you still have to do it in some capacity. But, like I said earlier, you don't have to do it as infrastructure; you can do it as a service or an app that runs on top of IPv6. And this is getting more and more common in different types of deployments. In mobile, for instance, you have 464XLAT, which is used by several providers in production already. My phone does not have an IPv4 address, it only has IPv6, but IPv4 still works because it's running on top as a service. I know that Facebook does something like this in the data centre; I don't know if they are using exactly the same type of technology as what I am going to talk about, but at least their data centre internally runs only on v6. And the same goes for normal wired ISPs: Kabel Deutschland runs DS-Lite, and I know there are others that are starting to roll out MAP, for instance. All of these are technologies that allow you to think of IPv4 as a backwards-compatibility service running on top of IPv6.

So, stateless IP/ICMP translation is one way to do this in the data centre. What I want to design is a data centre that has only IPv6 inside, or that can have only IPv6 inside. So all the servers, all the firewalls, all the routers and the layer 3 switches and load balancers and so on should have IPv6 capability. And I want to build this without giving any thought to how to integrate IPv4 into it. And especially, I don't want my colleagues who work more on the server and application side of things to have to deal with IPv4 as a separate thing; they should only have to care about one thing. So IPv4 is something that I want to have only in the periphery. All the way out at the edge, I want to translate IPv4 into IPv6 as soon as possible when the traffic comes into the data centre, and I want to keep the return traffic as IPv6 as long as I can on the way back. And that allows me to write ACLs that have only IPv6 addresses in them, I can monitor only the IPv6 service addresses, and so on and so on. So this makes it so that in my daily work and my colleagues' daily work, we don't have to think about IPv4 at all.

So, I am going to explain how this actually works in technical terms. It is a stateless translation which is completely predictable, where you take the source address of the IPv4 end user on the Internet and you prepend a translation prefix, which is assigned out of your own IPv6 address space, or you can use the well-known prefix, while you take the destination address and rewrite it according to a static mapping between an IPv4 address and an IPv6 address. So essentially what I am doing is taking the entire 32-bit address space of the IPv4 Internet and mapping it into an IPv6 prefix, and that is just a regular prefix inside the data centre.

So, going through it in detail, I am going to start with these boxes in the middle there that bridge between the IPv4 Internet and my IPv6-only infrastructure. I start out by taking a /96 IPv6 prefix; like I said, it can be the well-known prefix or it can be a prefix that I assign out of my own IPv6 global addresses that I have got allocated from the RIPE NCC or whoever, and I just route that, using completely normal routing mechanisms like OSPF or IS-IS or whatever, to those gateways. I also take an IPv4 prefix, presumably some of the last IPv4 addresses that I have, and route it to the IPv4 interface of the gateways; this would probably be advertised into the DFZ using BGP. And this is a good place to put your last /22 that you get from the RIPE NCC, for instance. And the configuration that I have to install on these boxes is that I have to state which prefix I'm using for the translation, in this case the well-known prefix, 64:ff9b::/96, and I have to set up a static mapping between an IPv4 address and an IPv6 address. That IPv4 address is part of the pool that I have routed to the IPv4 interface of these gateways. So, in this example, I'm using 198.51.100.10, and I am mapping that to the IPv6 address of the server, or it could be a load balancer or whatever. This would be the address that a native IPv6 client would typically connect to directly.

And then I add an IPv4 DNS record that announces that whatever service is running on this load balancer or web server is available on this IP address. So the client, when he wants to connect to my service, looks up the DNS host name and sees: oh, here is an address I can connect to. He doesn't have IPv6, presumably, so he will just send off a standard packet, destined to that IPv4 address, which finds its way to the gateways using completely standard routing mechanisms, so BGP typically, on the Internet.

And what happens next is that the gateway looks at the IPv4 packet it received and looks at the source address, which is the client's source address, and checks in its config: do I have a static mapping for this address? No, I do not. So then I'm going to fall back on using the /96 translation prefix, and rewrite this address to 64:ff9b:: followed by the same 32 bits. And this, by the way, is a valid way of writing an IPv6 address, with the IPv4 address embedded like that.

Next, it will look at the destination address of the original IPv4 packet and check again: do I have a static mapping for this address? Yes, I do. So I am going to rewrite the destination field to the IPv6 part of that static mapping. And the rest of the packet, as in the TCP payload and HTTP payload or whatever it is, just gets copied; it's not modified. So this mechanism does not alter ports at all; all the ports remain unchanged. It's only operating on layer 3.

And when this packet gets to the server, it looks like a normal IPv6 packet coming from a normal IPv6 client. So you don't need any support for this technology on the server; any standard IPv6 stack will do, and any standard IPv6 network that connects the gateway to the server will also do, because it's just plain IPv6. And any firewalls or anything like that in the path will just see this as normal IPv6 traffic and apply firewall rules as appropriate.

And so the server just responds to this packet as it would to a normal IPv6 packet, and the destination address of this response IPv6 packet is now in this /96 prefix that I am routing back to one of the gateways using, again, completely standard routing protocols, so it finds its way to a gateway. And one very nice thing about this is that since I copied the original 32 bits of the client's IPv4 address into the source address of the IPv6 packet, the server has all that information, which can be very useful if you are trying to track down somebody who is abusing your service, or if you want to do geolocation; you have the possibility to extract the original IPv4 address if you want to. You don't have to, but you can. And that is, I think, a very important point for content providers: to be able to do geolocation and abuse handling on their incoming traffic.

Anyway, I am sure most of you can guess the last step; it's basically the mirror image of what happened the first time. Again, I'm looking at the source address of the IPv6 packet, this time checking: do I have a static mapping for this address? Yes, I have, so I am going to rewrite it back to the IPv4 part of the static mapping. Same thing with the destination address: do I have a static mapping for that? No, I don't. So I am going to fall back on just stripping the translation prefix, because the address falls within the translation prefix. Which means that the response packet looks like the opposite of the initial request, and again, the payload is copied verbatim. And so the client sees this as a normal IPv4 connection, the server sees a normal IPv6 connection; neither of them needs to have any inkling of what is going on here, they will just see that it works. So in my view, this is quite easy and simple to understand, and I think that is what makes it attractive, because I like it when things are easy to understand; that means easy to debug and easy to keep running in a reliable manner.
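The address rewriting in both directions can be sketched in a few lines with Python's `ipaddress` module. The static mapping below (service address 198.51.100.10 to server 2001:db8::80, client 203.0.113.7) uses made-up documentation addresses; only the 64:ff9b::/96 well-known prefix comes from the talk:

```python
import ipaddress

# Well-known translation prefix, as used in the talk's example.
XLAT_PREFIX = ipaddress.ip_network("64:ff9b::/96")

# Hypothetical static mapping: public IPv4 service address -> IPv6 server.
STATIC = {
    ipaddress.ip_address("198.51.100.10"): ipaddress.ip_address("2001:db8::80"),
}
STATIC_REV = {v6: v4 for v4, v6 in STATIC.items()}

def v4_to_v6(addr: str) -> ipaddress.IPv6Address:
    """Inbound direction: static mapping first, else embed the 32 IPv4
    bits after the /96 translation prefix."""
    a4 = ipaddress.ip_address(addr)
    if a4 in STATIC:
        return STATIC[a4]
    return XLAT_PREFIX[int(a4)]

def v6_to_v4(addr: str) -> ipaddress.IPv4Address:
    """Return direction: the exact mirror image -- static mapping first,
    else strip the translation prefix."""
    a6 = ipaddress.ip_address(addr)
    if a6 in STATIC_REV:
        return STATIC_REV[a6]
    return ipaddress.ip_address(int(a6) - int(XLAT_PREFIX.network_address))

# Inbound: client 203.0.113.7 connects to service 198.51.100.10.
print(v4_to_v6("203.0.113.7"))         # 64:ff9b::cb00:7107
print(v4_to_v6("198.51.100.10"))       # 2001:db8::80
# Return traffic reverses both rewrites.
print(v6_to_v4("2001:db8::80"))        # 198.51.100.10
print(v6_to_v4("64:ff9b::cb00:7107"))  # 203.0.113.7
```

Because both functions are pure lookups and arithmetic with no per-connection state, any number of gateways given the same config will translate identically, which is exactly the property the next slides rely on for anycast and equal-cost load balancing.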

So, this is not an IPv6 talk per se, because I am not telling you how to run an IPv6 data centre; you should check out Jen's talk if you missed it, or any of the other good resources on how to run an IPv6 network. But this approach enforces IPv6, because you cannot use it unless you have IPv6, so IPv6 can never be an afterthought; it is the foundation of the entire platform stack. And that is a good thing, because the problem I have with deploying IPv6 is typically that v4 is the foundation and IPv6 is the afterthought, and the afterthought often gets deprioritised and other things happen instead and you are left with IPv4 only. So what I like very much about this is that it says: OK, we do v6, and we do it everywhere, and then you can actually have universal v6 deployment.

And since it's stateless, that means that for performance and scaling purposes it scales more like an IP router than a traditional NAT44 box or some sort of layer 7 load balancer or proxy. And that is a very good thing too, because the typical choke point for a NAT44 box is usually the number of connections that come in, whether it's how many new ones arrive per second or how many concurrent connections it has to keep track of. With a stateless approach, it scales like anything that does things per packet, like an IP router, and if you implement this in hardware it should be easy to do it at wire speed; and I think with a modern server you will also have no problem pushing ten gigs of traffic through a Linux box doing this in software, because it's a very simple task to do programmatically with an IP packet.

So, another nice thing about it is that since it's stateless, you don't have any shared information between the translators, except for the configuration of the static mappings and the translation prefix itself, so you can do equal-cost load balancing between multiple translators and so increase the capacity of the system. You can place them anywhere in your network and you can anycast all the prefixes, and it doesn't matter where the traffic ends up, because they will all do exactly the same thing with it. And if one of the translators fails, that is not a problem either, because then OSPF or BGP or whatever you are using will just take care of rerouting the traffic onto whatever boxes remain.

So, another thing is that now I am saving IPv4 addresses, because I am using only one IPv4 address per publicly available service. Normally, in a data centre, you might have thousands of servers behind a single website; I can have one IPv4 address used for the entire stack, if there is one site being run. And I don't have to allocate addresses in chunks; I can do one and one and one address, so my last /22 can be used fully. All the addresses can be used, even though they are mapped to completely different customers.

So, the thing is, I need the applications to support IPv6. For instance, HTTP and other application protocols that work fine through NAT44 will mostly work fine through this system as well, but applications that do not work fine through NAT44 probably will not, such as FTP; and you need your operating systems to support it. But there is a way to support IPv4-only applications, whether it's application software that does not support IPv6 sockets or protocols such as FTP which do not support translation, and NAT in general, and that is by actually installing one of these little gateways as a software package running on your host, on your server. If you are familiar with the 464XLAT architecture, this is extremely similar. So basically you get a virtual IPv4 interface on your server that has the same IPv4 address assigned as the public address of the service, so that the traffic that comes from the IPv4-only end user gets translated to IPv6, hits the server, gets translated back and hits the software, and the software then sees an IPv4 packet with the exact same addresses as it originally had. That even makes references to its own address work, so an application protocol like encrypted FTP, which has no chance at all of working through NAT44, can work completely fine through this system if you are using one of these little gateways running on the server, which I have called the host agent.

So, that is basically it. There has been a suggestion to actually make a little CPE that does this as well, so you can support IPv4-only appliances behind an IPv6-only data centre; that is yet to be looked into, but there is no reason why this shouldn't work either.

And yes, I am not going to highlight any, but there is running code and there are implementations that you can buy or download for free, and you can run this today, both the gateways themselves and the host agent that supports the IPv4-only applications. So if you want to play with it, you can. I am running it in production today and it works just fine. So that was my presentation. Thank you.

CHAIR: Thank you, Tore.

BENEDIKT STOCKEBRAND: A couple of things on this. I have taken a look at SIIT-DC, but also at the old SIIT, the original specs, and one big problem there is that ICMP errors need to be translated, so it's not quite as trivial as it sounds. Nevertheless, I do agree with your general statement that this is probably the easiest way to do this. Another thing, because I have had one customer where this actually turned out to be a problem: there are apparently some obscure content management systems which replace DNS names by literal IP addresses; you feed them some sort of text and they optimise it for speed by replacing DNS names with IP addresses. These things do happen; I have heard of several web designers doing this manually as well. When that happens, you do have a bit of a problem, because as soon as you have literal IP addresses in HTTP this doesn't work any longer. It's just one of those things that generally shouldn't happen, but if you have enough occasions for this sort of stuff, eventually one of them will get to you, so you want to be really careful and test these things out in a more ‑‑ I hope you don't take this personally ‑‑ in a more diverse environment than your fairly specific set-up.

TORE ANDERSON: On the first comment, about ICMP having to be translated: that is true. The RFC specifies how the translations are done; what you cannot translate are fragments of ICMP, you just can't. I have found that not to be such a big problem. And on the second comment: if you are using IP literals in your application protocols on purpose, then that is not the protocol's own fault but the application that uses it, and you would need something like the host agent to make sure that the addresses remain transparent. But again, if you are replacing names with your own address, it won't work through NAT44 either, and that is going to be a problem in real life for you as well.

AUDIENCE SPEAKER: It works through NAT44 in some cases, if the replaced address is a global address. But anyway, I fully agree with your overall statement that this is probably the easiest way to go in a number of scenarios; it's just that there are some pitfalls you want to be aware of, and sometimes some really weird things just come up and bite you in the ass.

TORE ANDERSON: You would need to test before production in the case of any kind of deployment.

CHAIR: Thank you very much. That is it for this session, but before the coffee break we have one announcement.

CHAIR: Don't forget, we are still trying to get candidates for the PC election that is coming up, so send in your profile. Two seats, yes. It's two seats. We need more than two candidates, of course. Thank you.