

Plenary Session
4 November 2014
4 p.m.


CHAIR: Hello everybody. If you can settle down, we'll continue this afternoon's Plenary programme.

My name is Mike Hughes, I am the session Chair for this afternoon; there should be somebody else with me, but they're not here. When they do turn up, that will be great.

Okay. So, our first speaker this afternoon is Steven, who is from UCL, so a local speaker. And he is going to talk about challenges in building overlay networks, and he is going to use Tor as a model for that.

STEVEN MURDOCH: Good afternoon, everyone. So many of you will know Tor; essentially, Tor is a tool for using the Internet anonymously. That, in itself, is of use to some people, but what it then allows you to do is to build privacy-preserving services on top of it, because if you start from a basis of not having privacy, there's no way that you can build privacy on top of that. Tor gives you infrastructure for building other services on top of it. And there are many possible uses for it and many groups who need to use something like Tor.

In this sort of community, where you might use it is for studying network abuse. If you want to look at websites and not have that website be able to discover who you are, because then they will hide the abusive material, you might want to use something like Tor. But also militaries and law enforcement use Tor. Tor originally started as a research project in the US Naval Research Laboratory, and what they're interested in is protecting personnel and protecting people who are working with military or law enforcement, because they don't want the criminals or the enemy discovering the identity of the people who are going to try to harm them.

But in civil society, the people who want to use Tor include journalists and their audiences. If there's an anonymous source, a whistleblower, you don't want to have that person's identity compromised because someone is analysing their network traffic. And another very important aspect of Tor is that it allows you to bypass Internet censorship, which is increasing throughout the world, and this allows people to access information freely even if governments are trying to get in the way. But to give a more motivating example, I am going to show a short video which gives an illustration of the types of people who are using Tor and the types of people who need Tor to work really, really well.

(Video shown)

So, many people in this room will have encountered Tor when they see the less good uses of it, but it's nice to see the faces of the people who are actually using Tor and the people who are relying on it.

The way that Tor works is that there is a network which is run by volunteers, and this network consists of Tor nodes. If someone wants to connect through the Tor network to a server, the way this works is that they first of all choose three nodes, the first one being the entry node, the second one being the middle node, and the third one being the exit node. They connect to the first one over an encrypted channel, over that encrypted channel to the second, and then finally to the third one. Once they have connected to the third one they can then leave and connect to whatever they want to connect to, which is anything that supports TCP, but most commonly now this is connecting to a web server. The important thing to remember is that all connections through the Tor network are encrypted, but this final hop is not, unless the application protocol is encrypted itself. So, if it's https, that's fine, but if it's unencrypted POP3, then that is very bad. So this is how the vast majority of people use Tor, but there are some other options. One is that if you live in a country where access to Tor is blocked, and this is not very hard to do because a list of all Tor nodes is publicly available, then people are not technically able to connect as normal. What they can do is connect via a bridge, down here. What a bridge is is a specially configured Tor node that runs the same software but is not publicly listed. The users have to somehow discover its IP address, either by accessing a website or getting it by e‑mail or finding out about it from their friends, and they can use this as a gateway to get into the Tor network. Then a third way of using it is hidden services, and hidden services have been in the news in the past few days because Facebook has now made its website available as a hidden service. The way that this works is that the user connects into the Tor network as normal, but not to an exit node; instead, to a node called a rendezvous point. The hidden web server, or any type of server, then also connects into the Tor network and connects to the same rendezvous point. The two TCP streams are joined up and now the two can communicate, and this works even if the hidden web server is behind NAT or in a country with a censored Internet. So it gives more security, because content is encrypted all the way through this path. It gives privacy to the web server, because it is anonymous as well. And it also gives better accessibility if something like NAT is in the way.
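
To make the path selection concrete, here is a minimal Python sketch of the idea. The relay list, bandwidths and exit flags are made up for illustration; Tor's actual selection logic works from the signed consensus and has many more constraints:

    import random

    # Illustrative relay list: (nickname, bandwidth, can_exit).
    RELAYS = [
        ("relay1", 5000, False),
        ("relay2", 9000, True),
        ("relay3", 2000, False),
        ("relay4", 7000, True),
        ("relay5", 4000, False),
    ]

    def pick(relays, exclude):
        # Bandwidth-weighted random choice, skipping already-used relays.
        candidates = [r for r in relays if r[0] not in exclude]
        weights = [r[1] for r in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    def build_circuit(relays):
        # The exit must allow traffic to leave; entry and middle must differ.
        exit_node = pick([r for r in relays if r[2]], exclude=set())
        entry = pick(relays, exclude={exit_node[0]})
        middle = pick(relays, exclude={exit_node[0], entry[0]})
        return entry, middle, exit_node

    entry, middle, exit_node = build_circuit(RELAYS)
    print("circuit:", entry[0], "->", middle[0], "->", exit_node[0])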

And the way that these are accessed is through a special domain name. They end in .onion; we have had a question about that before. This is not a valid top-level domain, but what the Tor software will do is recognise that this is a .onion address and look at the thing in front of it, and what this is is the hash of the public key of the service. What Facebook did is they generated lots and lots of public keys until they found one they liked the look of, but in practice it would just look like gibberish. And the advantage of this approach is that it's self-authenticating: Tor will only allow you to connect to the server which has the corresponding private key for the public key in the name, so it also avoids all the problems that we have seen with certification authorities.
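
As a sketch of how those names are derived under the scheme in use at the time (v2 onion services), the name is the base32 encoding of the first 80 bits of the SHA-1 hash of the service's DER-encoded public key. The key bytes below are a stand-in, not a real RSA key:

    import base64
    import hashlib

    def onion_address(public_key_der: bytes) -> str:
        # base32 of the first 10 bytes (80 bits) of SHA-1(public key)
        digest = hashlib.sha1(public_key_der).digest()
        return base64.b32encode(digest[:10]).decode().lower() + ".onion"

    print(onion_address(b"stand-in for a DER-encoded RSA public key"))

This is also why Facebook could search for a memorable name: you keep generating key pairs until the hash happens to spell something you like.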

Going into it in more detail, the way that the encryption works is known as telescoping routing; you can sort of think of this shape as being like one of the old telescopes that you have seen in the movies. What's happening here is that the user is connecting to an entry node, negotiating a key, and then using this key to make an encrypted tunnel, in this case the red one. Through the red tunnel, it does another handshake and connects to the middle node and gets another key, and then through that tunnel it connects to the exit node. Any traffic that comes in is encrypted three times, and then at each point one layer of encryption is removed. Similarly, in the backwards direction, one layer of encryption is added, and as a result, the communications will only get through if they follow the route that's chosen by the user, and we'll come back to why that's important a bit later.
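
A toy sketch of that layering in Python, using the cryptography library; this is not Tor's real cell format or key negotiation, and the keys and nonces are generated locally just to show the wrap-and-peel symmetry of counter mode:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ctr(key, nonce, data):
        # AES-CTR is an XOR stream: the same call adds or removes a layer.
        return Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(data)

    keys = [os.urandom(16) for _ in range(3)]     # entry, middle, exit
    nonces = [os.urandom(16) for _ in range(3)]
    cell = b"GET / HTTP/1.1 ..."

    # The client wraps three layers, innermost layer for the exit node.
    wrapped = cell
    for key, nonce in zip(reversed(keys), reversed(nonces)):
        wrapped = ctr(key, nonce, wrapped)

    # Each relay on the path peels exactly one layer as the cell passes.
    for key, nonce in zip(keys, nonces):
        wrapped = ctr(key, nonce, wrapped)

    assert wrapped == cell  # only the full, correct path recovers the data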

In addition there is another layer of encryption. Each pair of connections, so user to entry node, entry node to middle node, middle node to exit, is encrypted with TLS, the same encryption that's used in web browsers, and there is some effort in Tor to try to impersonate the behaviour of web browsers to try to make it a bit harder to block.

Now, I mentioned that it's the responsibility of the Tor client to choose these three nodes. In order to do this, it needs to know about the network, and so the network information needs to be distributed to each of the users. The way that this works is that all the Tor nodes publish their information, in the form of a descriptor, to the directory authorities. The directory authorities, and there are seven of them, get together and negotiate a consensus which gives a unified view of the network. It's important that everyone has the same view of the network so they all appear the same to anyone who is carrying out surveillance. This consensus can be given to the clients directly, or they can go via mirrors; normally every node will act as a mirror for the directory authorities.
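
You can see what a client learns from the consensus using stem, the Tor project's Python controller library; a small sketch (it needs "pip install stem" and network access, and here it just lists relays carrying the Exit flag):

    from stem.descriptor.remote import get_consensus

    # Downloads the current consensus from an authority or directory mirror.
    for relay in get_consensus():
        if "Exit" in relay.flags:
            print(relay.nickname, relay.address, relay.bandwidth)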

And of course, there are digital signatures involved to make sure that only a majority of the directory authorities are able to make any changes to the consensus, and these directory authorities are run by individuals throughout the world, so it's hopefully hard to coerce all of them simultaneously.

Now, because Tor is an anonymity network, it's hard to collect information, so a lot of effort has gone into trying to collect information in a privacy-preserving way. One thing that is quite easy to collect is the number of nodes, the number of relays, and here is a graph of it growing between 2010 and today. So, roundabout 2,000 in 2010, and it's gone up to about 6,000. You can see a drop-off here. This was because of Heartbleed. One of Tor's weaknesses is that there is only one implementation and only OpenSSL was supported, so Heartbleed was a serious problem. There was a scan of the Tor network by the Tor project, and any nodes which were found to be vulnerable were removed. So that's the reason for the drop, and then as they got fixed they were added back.

The bandwidth of the Tor network has also grown significantly. It's now up to about 10 gigabits a second, and one thing to note here is that the total capacity is about double the usage. That utilisation is dramatically higher than a normal network's, and that's one reason for some performance challenges.

The other thing that is quite tricky to measure is the number of users, but this is important. There is now a privacy-preserving way of measuring users, and you can see a dramatic jump here. This was roundabout the same time as the Snowden leaks, so initially it looked like it might be something related to that, but it was unrelated. It was a botnet which was using Tor as command and control. At this point, Microsoft added it to the list of software which was going to be removed from Windows, and there was a dramatic drop-off, but there are probably still some of those botnets just sitting there idle, not getting any responses.

The other thing that's very interesting to measure is Tor's performance. Many people who have used Tor have found it to be a bit slow, and back in 2010, that was definitely right. You could get really good performance sometimes, but the variance was huge, and because people were doing web browsing, where unless you get the whole web page none of it will display in most cases, the end result is that Tor is slow. But as the network capacity has increased, and also as the way of managing congestion has improved, you can see there's been a quite dramatic decrease in both the time it takes to download a 5-megabyte file and, most importantly, the variance in that time. And you can get more information about these on the Tor Metrics web page.

So, there are a number of challenges that have been faced while deploying Tor. The first one is that Tor is source-routed. Now, this might seem insane; source routing in IP is long gone, and for good reason. But this was introduced for security. Let's suppose that there are some nodes in the network which are malicious. A connection is compromised if the first node is compromised and the last node is too, because then the attacker can correlate traffic coming in and traffic going out and know which user is doing what, and that's exactly what Tor should try to prevent. Now, the probability of the first node being compromised, let's call that P, and the probability of the last node being compromised, let's call that P as well. The probability of both happening simultaneously is P squared. If 1% of nodes are compromised, 0.01% of paths are compromised, so that's reasonably good. The alternative is peer-to-peer networks, and that's where the network actually manages the routing, closer to the way that the Internet works. Now, the problem with that is that as soon as you connect to the first bad node, it will only tell you about the bad part of the network, because it's controlling your view. That is called the route capture attack, and it means that if your first node is compromised, and that happens with probability P, then you're toast. So, if the probability of a node being compromised is 1%, the probability of paths being compromised is also 1%. So that's 100 times worse than before, so there is a good reason that source routing is used.
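
The arithmetic is worth spelling out; a quick Python check using the 1% figure from the talk:

    p = 0.01                  # chance that any one relay is malicious

    source_routed = p * p     # need BOTH the entry and the exit compromised
    route_capture = p         # one bad first hop controls your whole view

    print(f"source routed: {source_routed:.2%} of circuits")   # 0.01%
    print(f"route capture: {route_capture:.2%} of circuits")   # 1.00%
    print(f"factor: {route_capture / source_routed:.0f}x")     # 100x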

That has challenges, particularly around the distribution of network information. Every client has got to know the equivalent of the global routing table, and this can be quite large. So the way that this is now managed is that there's not one directory per directory authority; they get together and make a consensus. The other thing that happens is that each client only needs to know a little bit of information about each Tor node's status, the microdescriptor, rather than the full descriptor, and that again reduces the amount of information they need to download, which is particularly important if you're in a country with limited Internet access.

And the other thing that clients need to learn is enough information to load-balance, because the network cannot manage the load; it's got to be the responsibility of the client. As a result of the efforts of Tor, this has improved somewhat, but it's still a problem for making the network significantly larger.

So, on congestion control, the reason that this is hard is because, basically, there are too many users and not enough bandwidth. 50% utilisation is incredibly large for an IP network; IP networks are something like 3 to 5%. If you tried to load your home Internet connection to 50%, you'd be cut off pretty rapidly. The other problem is that the standard approach to congestion is to drop packets, but that's not possible here, because the encryption I described is using counter mode, and if you drop any packet, then all future packets will be corrupted. So there is no way to drop packets; you just have to ask users to send less data. And also, it's running on top of TCP, and this has all the advantages of TCP, it's a very nice protocol, but one problem is head-of-line blocking. There are many circuits being sent over the same TCP connection; if a packet is lost from one, then all the other ones are going to be delayed as well.
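
The counter-mode problem is easy to demonstrate. In this small sketch (again the Python cryptography library standing in for Tor's actual cell handling), dropping one "cell" desynchronises the keystream, so everything after the gap decrypts to garbage:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(16), os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()

    cells = [b"cell-1__", b"cell-2__", b"cell-3__"]
    ciphertexts = [enc.update(c) for c in cells]

    del ciphertexts[1]   # "drop" the second cell in transit

    for ct in ciphertexts:
        print(dec.update(ct))
    # cell-1 decrypts fine; cell-3 comes out as garbage, because the
    # decryptor's counter no longer lines up with the encryptor's.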

So there are some proposed changes to try to improve this. One is to replace TCP, but you can't run TCP over TCP and get decent performance, so there's got to be something that's a bit like TCP within Tor, and this could be TCP itself running in user space, or it could be something a bit more sophisticated like uTP, which is an implementation of the LEDBAT latency-based congestion-avoidance system. And because it would no longer be using TCP but UDP, there will need to be some different encryption system which can handle packet loss. One possibility is DTLS, and that sounds really nice, but whenever a TLS problem happens, DTLS is hit ten times as badly. It's not really clear, but it's probably still better to use DTLS rather than making something up ourselves.

This is what the stack could look like if Tor changes network protocol. So, currently, the application, say a web browser, connects to Tor over SOCKS, and that's what this gateway layer is responsible for. There is multiplexing of multiple TCP streams over a single Tor circuit, and that's what the stream layer is responsible for. The authentication layer is to prevent the exit node manipulating traffic, and then there is the circuit encryption, which runs over TLS, which runs over TCP/IP. The way this would be changed is that there is still going to be SOCKS and stream multiplexing, still going to be authentication, and the circuit layer is basically unchanged, but underneath it is not TLS but some sort of TCP-like thing. It might be user-space TCP or it might be uTP or something like that; that runs over DTLS, and that runs over UDP. The advantage of this approach is that at this point, cells can be dropped if load is too high, and then the congestion control will back off and you hopefully have much shorter queues on each Tor node.

The final challenge I am here to talk about is Tor's network topology. Tor is a clique in which any Tor node can connect to any other Tor node; consequently any Tor node can be at any point in a circuit: it can be the first node, second node or third node. As a result, every node must be able to connect to every other Tor node, and what's more, because these links are kept up as long as there is traffic flowing on them, in practice many Tor nodes will be connected to every other Tor node. So the first problem is that there are going to be a lot of links that have to be kept up, and that causes a lot of problems, particularly on Windows, because you start running out of socket resources. But it also means that IPv6-only Tor nodes are not useful, because they are not able to connect to IPv4-only nodes, and there are still plenty of those out there. And also, it's not optimal for mixing traffic. Ideally, you want all the users to look the same. But when you have got two users using the same node, but it's, say, the first node for one user and the second node for another user, someone observing is able to see that, and then these two links are not mixed together.

So if you use the clique topology, you get something like this, and this has on the order of N-squared links; it's actually N times N minus one, over two. So that's a lot of links. The extreme move away from that is something called the cascade, and this is the topology that is used by the JonDonym system, which came out of the University of Dresden. This is much more efficient in terms of the number of links, there are only O(N), but the disadvantage is that it's very easy to trace traffic, because if you see something coming in here, you know it's going to come out there, and if you see something coming out there, you know it came in here.

So there is a hybrid approach, which is called the stratified network. This still has O(N squared) links, but fewer than the clique topology, and it has the advantage that the same node is at the same point in every connection going through it. This increases the amount of mixing between two different users who are using the same node, and there are some metrics for anonymity which show that this is the optimal design. In fact, Tor almost does this already. I said it's almost true that every node can be at every point. But actually, if a node is too unreliable, it's never going to be the first hop. And if a node doesn't allow traffic to exit from it to the rest of the Internet, it's never going to be a last hop. And then the nodes that are known to only be suitable for the middle will be preferentially weighted to be more likely in the middle, so what you end up with is almost this, and that's going to be ideal for some purposes.
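
The link counts are easy to compare; a short Python sketch (the equal-thirds split into entry, middle and exit layers for the stratified case is an assumption for illustration):

    def clique_links(n):
        return n * (n - 1) // 2      # every node linked to every other

    def cascade_links(n):
        return n - 1                 # one fixed chain: O(N)

    def stratified_links(n):
        third = n // 3               # entry, middle and exit layers
        return 2 * third * third     # links only between adjacent layers

    for n in (1000, 6000):
        print(n, clique_links(n), cascade_links(n), stratified_links(n))
    # at 6,000 nodes: ~18.0M clique links vs 8.0M stratified vs 5,999 cascade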

So, Tor has got a lot of open questions still. One is: how does it get more bandwidth? And that's where it needs a lot of help. Bandwidth is provided by volunteers, and by people who are running computers with money which is donated to them. More bandwidth is needed both at the exit nodes and at the middle nodes, so you can still contribute very valuably to the network without having to deal with the hassle of running an exit node. And what we have seen, in the case of people having nodes taken out as a result of Heartbleed, is that every little bit of bandwidth helps, provided the node is fast. Really slow nodes are not used, but apart from that, anything helps.

And also a lot more development is needed. I was very interested in the talk on how do we fund open source? It's a continual difficulty: how does Tor continue to run, given that the software can be downloaded for free, and there's no way that you can realistically charge a Syrian dissident for using this software? It would be very sad if people like her were not able to use Tor.

And there are lots of different aspects to Tor. One is how to resist censorship, which I didn't talk about at all, but it's really interesting. And the other one is how do you actually get the user experience to work, how do you make it easy to use?

And because Tor is so important for many people, it's not very easy to make changes because maybe they'd go wrong, so principles and tools for trying to change the network, to grow the network, to make it just better to use would be very valuable.

And then finally, ideas for how do you safely measure this network to get the answers that are necessary to deal with the previous issues, but still not put users' privacy at risk?

So, I'm happy to take questions now, and also I'll be here for the rest of the day and in the reception if you want to talk to me about anything else.

(Applause)

CHAIR: And we have got some time for questions.

AUDIENCE SPEAKER: Thanks for the explanation. You asked ‑‑ you stated we need more bandwidth. What is the average that a typical middle or exit node requires?

STEVEN MURDOCH: Really, anything more than a dial-up modem will probably be used, and there is a huge amount of variance between different Tor nodes. So, the bottom ones are probably only a few hundred kilobits a second. The top ones are probably megabits a second, possibly even more; I'd need to look at the statistics to see. But the probability that a node is selected depends on its bandwidth. So basically Tor will use up as much as you can give it.

AUDIENCE SPEAKER: Are there specific limitations that you would put on an exit node or a middle node? Knowing that probably the majority of the people here in the room can provide more bandwidth than you currently have in the complete network, are there strategic reasons to say we only want to have X amount of bandwidth within a certain network, to avoid things?

STEVEN MURDOCH: I think that would be a great problem to have. That is not the case at the moment. If it became the case that a few people were a significant proportion of the bandwidth, then something would probably need to be done about that. But Tor is nowhere near in that situation yet. If it was, the directory authorities do have a tuneable parameter where they can cap particular nodes if necessary and also if abuse is detected they can kick nodes out entirely, but realistically that's not a problem.

AUDIENCE SPEAKER: In the same line of thinking, would you want specific numbers of exit nodes or middle nodes in a specific network or subnet, or would you prefer just two or three in a network and that's it? If an ISP sponsored that kind of thing, what kind of equipment would you require for an exit node and a middle node? The reason why I ask is there are a lot of ISPs here in the room, and those are probably the questions that, you know, you would want to have an answer on, and then, after that, I'll have another one for you specifically on abuse handling.

STEVEN MURDOCH: Okay. So, in general, diversity is good. If there is too much on one ISP or too much run by one individual, then that's something that needs to be looked at. In terms of hardware, there is a mailing list where people who run the really fast Tor nodes talk to each other, and that's the best source of information. Tor runs on pretty much any platform that you might choose to use, but when you start tuning it up to gigabit links, then there are some tricks, and that's the sort of thing that other Tor node operators have published about.

AUDIENCE SPEAKER: Okay. Then my final question: how do you deal with the abuse you might get from running an exit node? Are there tools to say, well, I would not like you to use SMTP on my nodes, or how do you deal with that?

STEVEN MURDOCH: So, every node has a set of policies associated with it, one of which is the amount of bandwidth it can use, and this can say things like: you can use this amount of bandwidth per month, but no more than this peak and no more than this average. So that deals with one set of costs. The other one is the exit policy, which says which IP addresses and which ports can be accessed. So, by default, port 25 is blocked to get rid of mail abuse, and then you can change this to almost arbitrary definitions if you want to block on the basis of IP addresses and ports. This currently is the limit of customisation, because, due to the source routing, the policy has to be sent back to the user, so it's just IP addresses, masks and ports.
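
As an illustration, these policies live in the relay's torrc configuration file. The values below are placeholders, not recommendations:

    # Long-term average rate and short-term peak for relayed traffic:
    RelayBandwidthRate 2 MBytes
    RelayBandwidthBurst 4 MBytes
    # Monthly volume cap; the relay hibernates once it is reached:
    AccountingMax 500 GBytes
    AccountingStart month 1 00:00
    # Exit policy, first match wins; port 25 is rejected by default anyway:
    ExitPolicy reject *:25
    ExitPolicy reject 192.0.2.0/24:*
    ExitPolicy accept *:80
    ExitPolicy accept *:443
    ExitPolicy reject *:*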

AUDIENCE SPEAKER: Speaking for myself: if you want to donate stuff to the Tor project and you don't have the time to run these exit nodes or take care of the abuse, there is a project called torservers.net; they take care of maintaining this stuff and also the abuse, and have lawyers especially for that case, and you just need to drop them a box, some bandwidth and power, and it's done.

STEVEN MURDOCH: Yeah, so that's really useful. There are a few organisations like that. There is Torservers, there is Noisebridge; they cannot get too big, because we come back to the previous issues, but the advantage that they have is that the more bandwidth you buy, the cheaper it gets, and there are definite economies of scale there, and they also deal with a lot of administrative issues, so they are very useful.

CHAIR: Okay. Well, we are just fitting into the half hour, maybe a minute over, but thanks very much, Steven, for that interesting talk.

(Applause)

I would now like to invite our next speaker to the stage; that's Tatiana Tropina. She is a security researcher who is going to talk to us about, basically, what happens when our existing bottom-up management process collides with mandatory requirements and regulation. Thank you.

TATIANA TROPINA: Thank you very much. So, just in case you are wondering what a lawyer is doing here presenting to the technical community, I want to make reference to the RIPE NCC academic cooperation initiative which brought me here, and thanks to RIPE for bringing me.

And secondly, why am I here and why am I going to present on cybersecurity? I think it's very important for the technical community to understand how the processes which are going on now at the European level in this field can influence the way you operate, or are used to operating. Yesterday, at the Opening Plenary, we had a Dalek saying that it could destroy the Internet easily by implementing just a top-down approach, a strong top-down approach. And where it's going now in the field of cybersecurity, I don't think it can destroy the Internet, but it can significantly influence your day-to-day life and operations.

But before I start, before I move to the content, to the substance of my presentation, I would like to say what I'm actually going to talk about, because the word cybersecurity itself means several things, and they often get misused and confused, creating misconceptions.

So cybersecurity can mean cyber crime, critical information infrastructure protection and resilience, and also national security threats and cyber war, and there are no fine lines between these terms. So, I would like to highlight that today I'm not going to talk about cyber crime. This is a different field of regulation, and the technical community does play a part in this regulation, but it refers to criminal law. We know what crime is, we know that law enforcement agencies are dealing with it, we know that some non-profit organisations are trying to deal with consumer protection or data protection, so we know that there are plenty of frameworks existing there and that this area should be regulated. But when it comes to critical information infrastructure protection and national security, it's totally different.

I drew this cybersecurity pyramid maybe one-and-a-half years ago to explain the differences. We have cyber crime at the bottom of this pyramid as the toughest regulation, because it refers to criminal law, and criminal law is the highest degree of governmental enforcement. But if we come to critical information infrastructure protection and national security, there has not been so much regulation yet, because critical information infrastructure protection mostly refers to industry and the operators of this infrastructure, and national security refers to the level of foreign policy and diplomacy, so it's not criminal law and maybe it's not regulation as we imagine it. And just to make it understandable: where do I draw the line between cyber crime, which I'm not going to talk about, and regulation in cybersecurity, which my presentation is about?

There was a case one year ago in the European Parliament; you might know about this. After the Snowden revelations, the European Parliament had hearings on the hacking of Belgacom and the SWIFT information systems, and Rob Wainwright, the director of Europol, made a statement that this hacking could not be investigated by Europol, because Europol deals with cyber crime and not with national security, even upon request from the Government. So I would draw the clear line here: in cyber crime we need to attribute, so the whole machine, the whole process is working towards investigation, finding the cyber criminal, attributing and then prosecuting. In critical information infrastructure protection and national security, it's more about risk mitigation and resilience. For example, a recent report from the European Network and Information Security Agency says that the most dangerous thing is now not actually hackers; the most important threats are human errors, technical failures and weather disasters. So it's not always about crime, and it's not always about finding who is behind the attack or behind the monitor.

So, coming to public-private collaboration and the multi-stakeholder environment as we know it, in terms of national security and critical information infrastructure protection: there has not been that much regulation in Europe, so we all live under the notion of my network, my rules. I have to make it secure on my own. If I'm providing customers any services, I have to make them trust in me.

So the important role of the industry in these processes was always recognised, that industry is making the network secure, and everything was based on the idea of voluntary cooperation, the multi-stakeholder environment. I believe this was influenced by several things. First of all, the US approach to the development of information networks: leaving the issue of security to private industry and providing some incentives to cooperate.

Secondly, and here I'm coming back to cyber crime, it's the experience we had in cyber crime investigation and prevention, because there were already some private hotlines for reporting and blocking illegal content, notice-and-takedown codes of conduct, so industry was participating. But I believe that the main reason the technical community and industry remained unregulated was that no one actually knew how to regulate this, because it's such a complex environment in terms of national security. In previous times, in the physical world, a threat to national security and infrastructure protection always meant physical aggression, and when aggression comes from cyberspace, there was no agency responsible for addressing these issues. And it created a patchwork of different legal regimes. For example, if you ask me or anyone from the European Commission who is responsible for cybersecurity at the European Union level, we shrug our shoulders, because it's a complex puzzle and no one knows how the pieces come together; there are so many agencies who are responsible in some areas, and sometimes those are overlapping. I will not bombard you with the acronyms of these agencies, but there are, for example, BEREC, the regulatory body on electronic communications in the European Union, and ENISA, the network and information security agency, and there were some attempts to create public-private partnerships, like the European Public-Private Partnership for Resilience, which remained only on paper.

So the first red flag which I saw, which I detected, about moving from this notion of multi-stakeholder, multi-faceted strategies and leave-industry-alone concepts to regulation was last year in February, when Europe adopted the European cybersecurity strategy framework. While it highlighted the necessity for public and private collaboration, it introduced, at the same time, the idea of imposing reporting and information-sharing obligations on the industry. And for me, that was a kind of contradiction, because you either provide incentives for the industry to do something voluntarily, or you impose those obligations.

And now I'm asking: what has actually happened to this voluntary collaboration idea with recent developments?

In the European Union there is now a big discussion on the development of the network and information security directive. It's still a draft. It was introduced by the European Commission first, and it introduced mandatory reporting of cybersecurity incidents instead of voluntary collaboration. If you think that it was only about critical information infrastructure, banks and so on, you are mistaken, because the first draft included all information society services: it included Cloud providers, it included ISPs, almost whichever entities you can imagine.

In March 2014, this year, the European Parliament was discussing this draft, and they removed almost everything from it and left only critical information infrastructure providers, but still included Internet exchange points as a subject of cybersecurity regulation. But this is not the end of the story. This is how the draft stood when I was submitting my proposal for this presentation, so I will come back to the directive a bit later.

The second worrying trend for me is what is going on at the national level, because in Europe there is no unified approach. For example, Germany recently introduced a draft IT security law which not only demands that critical information infrastructure providers report cyber incidents, which is in line with the EU directive, but also requires telecommunications providers and ISPs to notify the Government in case of any incidents which could have as a consequence crimes against integrity, confidentiality and availability, which is a lot. And then it obliges information society service providers, the so-called Telemedien, which had been excluded in the meantime from the EU directive, to implement, first of all, technical protection measures, and to secure the authentication procedure, meaning that the Government is going to regulate what kind of technical measures have to be adopted by information society services.

Actually, I don't like it when presenters bombard the audience with quotations, but I do have to use a few to give you the idea of what is going on. So there are proponents of regulation, and it seems that this draft directive as adopted by the European Parliament, this edition of the draft directive, didn't make anyone happy. It was a kind of bad compromise. The proponents of regulation said that this waters down the proposed regulatory scheme and that the bad guys must be laughing their heads off, and you can see that this refers directly to cyber crime. But I believe that this is a very wrong notion, because cyber crime is something different. So it's not about protecting consumers from criminals. But, still, this is a major argument for proponents of regulation.

Opponents of regulation have many more arguments, and they are also unhappy with the current draft of the directive. But I have to go back to the proponents and say that this assumption of the necessity to regulate technical services and providers is based not only on cybersecurity and national security concerns, but also on the idea that it would be good for customers. So if providers of information society services, Cloud providers, web hosting platforms, had to implement some reporting obligations and some mandatory cybersecurity measures, it would be good for customers, because customers would choose between trusted and untrusted providers. And this assumption comes from the neoclassical economic theory that customers choose security and reputation over risk, which is plainly just theoretical. There has been no evidence for this theory; rather, the evidence has been to the contrary: customers prefer a lower price, maybe a higher risk, but quicker gains.

And then there are other concerns related to regulation which already exists or is being proposed or discussed. For example, how does this mandatory reporting obligation relate to data protection reporting? Because maybe you know that at the EU level the data protection framework is now being discussed, and a lot of providers and data controllers will have to report security breaches. Then, if we come to critical information infrastructure: banks already have duties to report, they report suspicious transactions and they report security incidents; telecom operators are reporting too, but they are at least excluded from this directive.

So, again, it creates a patchwork of different regulatory regimes, and it's not clear how they are going to come together and match.

The next concern is that there are already some voluntary information-sharing mechanisms, and they are regarded as trusted. So why destroy them? Why destroy them and impose a new regulation with uncertain results, which would be costly, and which might lead to a totally different approach to cybersecurity, reactive instead of proactive? It is not clear how it will influence research and development, and also, there is a concern that those obligations will fall on entities which are already doing something, which are already collaborating, and it might undermine the whole concept of public-private partnerships, which I believe are underestimated and wrongly understood at the level of the European Union. Because what national security people are saying is that private industry is not collaborating, that the interests of the industry and governments and the general public do not match when it comes to national security and information infrastructure protection, which I believe is very wrong. Public-private partnership means it's not only industry which is investing money and effort; it's also the Government which has to back up those partnerships with effort and money. And I don't see anything like this going on at the European level in real terms; in the field of national security they tried and failed, or I'd rather say they didn't try well.

Coming back to the draft network and information security directive at the European level: the fact that information society services were excluded and only Internet exchange points were included in this directive is not the end of the game, because now this directive is being discussed at the level of the European Council, and the story is getting interesting. At the EU level, there is no agreement between the governments on which obligations to impose and what kind of entities they should be imposed on. There are countries with a very strong tradition of cooperation, or which claim to be such countries: the United Kingdom and the Czech Republic are insisting that these entities should be excluded and the approach should be voluntary. France and Spain want to regulate everyone, including the information society services. Germany thinks that this should be left at the country level, so at the national level governments should decide what to do and whom to regulate.

So the result of those debates came out recently, and when I saw it, I thought: oh, God, here we go again. If you look at this, and I'm sorry, there is an awful lot of text there, on the left you see the Commission proposal with all the information society services which were included: e-commerce platforms, search engines, social networks and so on. The European Parliament, in the middle, excluded everyone. What is going on now in the European Council? They propose to include not only e-commerce platforms and Cloud providers and applications; they propose to also include Internet exchange points, national domain name registries and web-hosting services. So, it's now even tougher than it used to be when this directive was first proposed by the European Commission. And this debate is still ongoing. The major idea now is that maybe it will not be regulated in a unified way across Europe but, as Germany proposes, it will be left at the country level. So if the United Kingdom doesn't want to include information society services in the scope of this regulation, it can just leave them aside, and countries which want to do this are free to do so.

Well, and now I'm going to talk briefly about the concerns which all this brings for multi-stakeholder collaboration and voluntary approaches.

This exposure of information society services to this type of regulation is without precedent. We never had this, and I believe that it is truly disproportionate, especially if it is left at the country level, because that would create legal uncertainty across Europe. And if we're talking about a single European market and seamless switching and operating under the same terms in any country, this is not going to happen, and I believe this might create competitive disadvantages for those who want to operate in several countries in the EU.

Then the big question is the transparency and enforceability of this reporting obligation and information sharing. What about trust, I mean confidence-building measures? Any information sharing needs trust. How are you going to impose trust? This is not something tangible that you can just go and check. It's like saying: I will force you to love me. You cannot. So, this information sharing cannot really be enforced, because it cannot be checked. If governments are going to check this, they have to impose some fines and penalties as well. What is proposed now is that when a company is not acting in good faith, it would be fined. But then this creates additional pieces of the puzzle, and it's not clear how they are going to check this, what kind of fines are going to be paid, how they are going to check that a company is actually following the rules and reporting all the incidents. I still don't understand the enforceability of this. I think it is ridiculously unenforceable and can exist only on paper, or create some precedents.

Then the question is the compatibility of approaches, because we know that cybersecurity is a global issue, and I know that this sounds lame, but it's a global issue not only on paper or in politicians' speeches.

In the USA, they made attempts to legislate cybersecurity in the last two years. There is a framework and a presidential executive order, and there is no mandatory reporting obligation; there is a purely non-regulatory approach with some incentives for industry to comply. So, now we have incompatibility in the approaches between the EU and the US, and if we are talking about a global cybersecurity framework, it doesn't make sense. But it's not only about global cybersecurity frameworks in terms of regulation; it's also about how industry operates. What if a global service provider is in the US? Okay, forget about global service providers: what about application developers and stores which are in the US but want to sell apps in Europe? Which kind of regulatory approach will be applicable in this case? And then there is also what I mentioned, differences in the EU member states' approaches: how does that actually fit the purpose of this directive? Because they say that they want to create a single approach to cybersecurity for Europe, but if we leave the question of which entities to regulate and who has to report mandatorily at the national level, we are creating a patchwork of different legal regimes instead of a unified approach.

I believe that the reason for failure in this sense is actually what I drew in the grey cybersecurity pyramid. Cyber crime is regulated, and cyber crime is the reason why all this was brought up. National security concerns are very strong, but people don't know what to do. What can we do? We can regulate cyber crime, so maybe if we strongly regulate industry, we might achieve some kind of success. But at the same time, if we are thinking about regulation in this field, there has been no precedent. So, in the title of my presentation, where we are moving "back" to regulation, I put "back" in inverted commas, because there is no back: we don't have any precedent of regulating anything here. All these attempts to regulate cybersecurity are very new. What I mean when I say "back" is that industries like telecom, or the financial sector, were regulated, and in the last seven or eight years there was a concept of liberalisation, the idea that we need smart regulation, that we need to regulate only the areas where intervention is needed, and where it's not needed we can just leave things aside. And I believe that the Internet was able to develop to this extent because it was an unregulated environment, where you had to gain trust, where there were no entry barriers in economic terms; it was pure competition, market-driven, customer-choice-driven, and we all liked it. But what is going to happen now? How are we moving from collaboration and operating in a totally unregulated environment back to regulation? How are we going to replace this bottom-up approach with a top-down approach, and wouldn't it create a very dangerous precedent for other areas, just using cybersecurity?

So, I am still getting my head around the question: what should be the role of the industry in this type of security governance? Why are they doing something without even talking to the industry, and how can you actually reach governments and talk about this? And, yeah, this is a question I probably don't even have time to answer and give my ideas on.

So, thank you very much for listening and I would be happy to answer any questions if you have them. I hope you do...

(Applause)

CHAIR: Thank you and we do have about three minutes for questions. Any questions?

AUDIENCE SPEAKER: I might have one. Carsten Schiefner. Thank you very much for that presentation. I just wonder whether there will be time in the Cooperation Working Group, potentially like a spill-over, to have a little bit of a discussion about your suggestions, your ideas, your hints about what each individual here in the room can potentially do to make that not happen, this kind of going back to regulation when it used to be just cooperation, or the multi-stakeholder approach. So, I just wonder whether you would have time, on, I guess it's on Thursday, to give some of your ideas during the Cooperation Working Group then.

TATIANA TROPINA: Yes, definitely. Yes. Thank you very much.

CHAIR: If there are no further questions. Thank you very much.

If we could get our next speaker to the stage, Raymond, we are going back to overlay networks again now.

RAYMOND CHENG: Last talk. So, my name is Raymond Cheng and I am here from the University of Washington to tell you about a project that we have been working on called uProxy, which is a lightweight proxy server packaged as a browser extension that you can share with your friends and family. It's some work that we're doing with some individuals on this list, as well as our friends at Google Ideas.

So, let me just jump straight to the few things I want you to take away from this talk, and the first of which is that I believe users need more control over the path that they take on the Internet. The Internet is a highly interconnected space, but one property holds true, and has held true since the beginning of the Internet, which is that users, when they access a site, have no control over the path that they take from their machine to the desired site. What that leads to is some interesting properties, one of which is that if you are in the middle of the network, you have an immense amount of power to be able to monitor, surveil, filter, censor, manipulate or even attack the users.

So, one of the things that I believe needs to be true is that we, as a community, need to introduce tools that drastically lower the bar, to allow non-technical users to exert more control over the paths that they take on the Internet and to be able to reason about what's happening, so that even the most basic user can try to access otherwise disrupted sites.

uProxy is a step towards that. It's a Chrome and Firefox extension that we're building that essentially exposes some set of contacts; perhaps you have logged into a social network, such as Facebook, and it will look like what you'd expect out of a web app. You'd be able to request the availability to proxy through one of your social contacts, and, assuming that the other side has also accepted, that would establish a secure encrypted tunnel between you and your friend, through which you can proxy all of your browser traffic before it actually goes out on the Internet.

And one of the things that we're trying to do with uProxy is to have a focus on the user: to design the product so that it's extremely easy, so that non-technical users can configure the product in a way that makes sense.

So let's take a quick step back and ask: well, why do we need a tool like this? As I mentioned earlier, somebody in the middle of the network has an immense amount of power to dictate what you can see and what happens to your connection. One of my favourite examples was back in 2010: there was a tool released called Firesheep, I don't know if you remember this, it provided hours of enjoyment, where it essentially made it exceptionally easy for somebody to sit in the middle of a coffee shop with this Firefox extension running, be able to snoop on http traffic, extract cookies, and then easily log in as the victim on whatever site they were accessing at the time. This might sound like a fun little toy, but let's fast-forward three years from then. What we know now is in the same spirit of vulnerability: we know that entire nation states, with a large number of financial and human resources, are surveilling en masse and can track entire populations of the world. What this essentially points to is that universal encryption just isn't here yet. There was a report released before the leaks that showed that peak North American traffic was about 2.29% encrypted, and just earlier this year, after the leaks, after all of the news and the hubbub, that number has jumped up to 3.8%. So, clearly we still have a long way to go.

It's hard to talk about network disruption without talking about censorship. I know that this community has had some talks about this in the past. I can imagine how difficult it is if you are someone on the Internet trying to gain access to some kind of news or some kind of site, and having your ability to seek the truth, and what knowledge you absorb, shaped by somebody else.

And so, what we know about Internet censorship is that the problem is not getting any better. Freedom House, back in 2013, released a report that showed that 19 countries now have some kind of blocking of social media or communications tools, and we're up to 29 countries now that have done some network-level manipulation based on political, social or religious content.

And if I haven't thrown enough motivations at you already, I'll throw one final one out, which is that Internet outages are common, and it's something that's been very well measured in the community. This is just some results of a study that we did at the University of Washington, by some of my colleagues, where we measured a set of 2 million outages over two months from vantage points located in four different AWS regions, and we showed that a large number of these outages were partial, in that some vantage points were able to access that point in the Internet but others weren't. We also showed that a large number of these outages last for quite a long time. While I don't want to go into too many details, we know that this may happen because of route convergence delays, prefix hijacking, etc., etc. What it essentially tells us is that a well-placed relay, and your ability to proxy through another point in the network, may potentially lead to better performance or a route around failures.

So, I'm not claiming that any of these problems are new. A lot of people have looked at them. In fact, there has been a lot of interesting research on techniques for how one might give users the ability to solve some of these problems, and at the same time there are tools, commercial as well as open source, that help users achieve this. But I think we now live in a time that represents a unique opportunity to revisit these things.

First of all, we have an unprecedented number of social media and communications tools now. It's hard to convey how connected we are today. And it's not just the fact that users can connect on these tools; these tools actually provide a basis for us to derive some higher level of identity or pseudonym, some notion of somebody's social network or set of acquaintances, and somewhere to do the negotiation and bootstrapping. Here we are talking about a higher level than IP addresses and machines.

Secondly, the web is, in essence, becoming the de facto write-once, run-everywhere platform. It's incredible what we have seen in the past decade in terms of how easy it is to deploy software, both for the developer and, from the consumer's point of view, how easy it is to consume software. It drastically lowers the bar for developers to deploy what they're working on.

And perhaps you'd be surprised at just how powerful these tools are now. I mean, people are writing some pretty incredible networking stacks entirely in JavaScript, and what these web standards are providing for us is the ability to write advanced networking stacks inside the browser using JavaScript.

So let's take a look at the model in which we're analysing this problem. Here we're really looking at the goal of: how can we provide access to a more open and free Internet to more of the world's population? Again, as I mentioned earlier, you have no control over the path that you take from point A to point B. So, if a set of users is in some unsafe environment, and there's some entity on the network that has, or can exert, some sort of control over the network, then that can lead to filtering, surveillance, monitoring, failures and wide outages.

Perhaps generalising some large class of tools that already exist: there are proxy servers, but there exist some issues with proxy servers. First of all, if you have ever set up a Squid server yourself, you'd know that that's not something that I would ask my parents, much less my grandparents, to be able to do. They can be difficult to configure. They are hard to set up. And we wouldn't expect everybody to have one of these things.

What I might expect my parents to be able to use is some kind of packaged client that would communicate with some centralised proxy service, such as a commercial VPN. But here we may encounter issues with scale and trust. In terms of scale, well, if there are a lot of users accessing this proxy service, it could be as easy to block and enumerate as the sites that you are trying to gain access to in the first place.

Secondly, you have to be able to trust the proxy server. You are placing as much trust in the proxy service as you are in the network itself, so a malicious proxy could, in theory, do the same types of attacks that we're trying to guard against.

And conversely, the proxy needs to be able to trust the users because if anybody can access this proxy, it's possible that they'd be doing some sort of illegal activity.

So uProxy is an experiment in what we consider distributed proxying and distributed trust. In essence, what we're trying to do is make it easy to deploy a proxy server amongst a large set of users, and essentially the way that we're deploying it is by releasing it as a browser extension. Secondly, we're making trust explicit. So, we provide support to connect to a number of different social networks, including chat networks such as XMPP, perhaps over e-mail, perhaps over Facebook, and what that essentially provides you is a way to discover your friends and to express consent, not in terms of an access control list or a configuration file, but actually in terms of: do I trust this user to provide proxy access for me, or do I trust this user to proxy my traffic through?

It also serves as a rendezvous point, so the user doesn't have to think or reason about low-level details such as NAT traversal or IP addresses. It provides a signalling channel for us to establish a peer-to-peer connection, and that peer-to-peer connection is encrypted to one of your friends, who would then be able to relay the traffic out to a desired site such as YouTube.
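A rough sketch of how a chat network can carry that signalling follows. The chatClient object is a hypothetical stand-in for whatever social-network transport is in use; the WebRTC calls themselves are the standard browser API:

    // The chat network relays session descriptions and ICE candidates;
    // chatClient is a hypothetical wrapper around XMPP, e-mail, etc.
    pc.onicecandidate = function (event) {
        if (event.candidate) {
            chatClient.sendTo(friendId,
                JSON.stringify({ type: 'ice', candidate: event.candidate }));
        }
    };

    chatClient.onMessage(function (from, text) {
        var msg = JSON.parse(text);
        if (msg.type === 'offer') {
            pc.setRemoteDescription(new RTCSessionDescription(msg.sdp))
                .then(function () { return pc.createAnswer(); })
                .then(function (answer) { return pc.setLocalDescription(answer); })
                .then(function () {
                    chatClient.sendTo(from,
                        JSON.stringify({ type: 'answer', sdp: pc.localDescription }));
                });
        } else if (msg.type === 'ice') {
            pc.addIceCandidate(new RTCIceCandidate(msg.candidate));
        }
    });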

The last thing that we're trying to do here is write a tool for self-empowerment, not a service. Essentially, this gives users the ability to set up their own networks very easily. Some use cases you can imagine: by doing this, you are essentially allowing users to see the Internet from your point of view. So, if you are trying to access some sites that may be censored in your region, you could proxy through your friend and access the Internet as if you were sitting on the same wi-fi network as them. You could imagine this being useful for travel as well. Say you are going someplace with unencrypted wi-fi, such as this hotel here, and you are not sure who is monitoring the traffic; you could proxy your traffic back to a machine you control at home. What we're not trying to provide is anonymity. Steven gave a great talk earlier about Tor, and there's no reason why you shouldn't be able to use this in conjunction with Tor to provide anonymity. The second thing is, it doesn't guarantee safety. If you proxy yourself into a region that's less secure, less private, more monitored, that's totally possible; it's a matter of sharing pathways on the Internet.

So, I mentioned earlier that we really wanted to get the user experience right, so I wanted to share some of our user design mock-ups. We have an introductory panel when you first launch the extension. You'd be able to connect to some set of social networks, and then you'll have a contact list, as you see on the right. After which, say, Alex wanted to ask [Rutu] for access to her proxy: you can ask for access, they can then accept or reject that request, and then you'll receive a notification when you actually start proxying that your Internet traffic is going through them. On the other side, you can see who is connected to your device, you can stop any one of those connections, you can pause your ability to share at any time, and you can select the subset of social networks that you want to use for bootstrapping.

To describe a little bit of the architecture of how we designed this thing: most of it is written in JavaScript. We have a UI and some core logic, all of which is platform agnostic, which makes it extremely easy for us to port the same logic to a Chrome extension, to a Firefox extension, or to Node.js to operate it in a headless fashion. We have social network support, with which you may connect to some arbitrary social network and use it as a way to discover your friends. That acts as a signalling pathway for you to set up the actual transport mechanism, the peer-to-peer connection, which may require some STUN servers for NAT traversal. In this particular case, what we have implemented so far actually uses WebRTC, which, if you are not familiar, is a new web standard that provides encryption, congestion control, in-order delivery and NAT traversal, accessible from JavaScript.
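For the NAT-traversal piece, the STUN servers mentioned above are passed to the peer connection as configuration. A minimal example, using a public STUN server address chosen purely for illustration:

    // ICE uses the STUN server to discover the public address/port mapping,
    // letting two peers behind NATs connect directly where possible.
    var pc = new RTCPeerConnection({
        iceServers: [ { urls: 'stun:stun.l.google.com:19302' } ]
    });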

After which, the uProxy core logic actually exposes a local proxy; once the connection is established, you can set your proxy settings to direct all of your traffic through your peer before it actually goes out onto the Internet.
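In the Chrome version, one plausible way to point the browser at such a local listener is the chrome.proxy extension API. A sketch, assuming the local proxy speaks SOCKS5 on port 9999; both of those details are assumptions, as the talk only says "a local proxy":

    // Route the browser's traffic through the local listener exposed by the
    // core logic; from there it is carried over the peer-to-peer connection.
    chrome.proxy.settings.set({
        value: {
            mode: 'fixed_servers',
            rules: {
                singleProxy: { scheme: 'socks5', host: '127.0.0.1', port: 9999 }
            }
        },
        scope: 'regular'
    }, function () {
        console.log('browser traffic now flows via the local proxy listener');
    });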

A couple of things to note here. One of the things we implemented is obfuscated WebRTC, which allows us to provide a module that takes in buffers from the WebRTC stack and performs some kind of transformation on them, such as format-transforming encryption, which makes the traffic fit a given regular expression before it goes out onto the network. We use that as a mechanism to control what is actually seen on the wire.
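The obfuscation layer can be pictured as a pair of functions applied on either side of the wire. A hypothetical interface sketch; the shape is invented for illustration, and real format-transforming encryption is considerably more involved:

    // A transformer sits between the WebRTC stack and the network: outgoing
    // buffers are reshaped to match a target format, incoming ones restored.
    function Transformer(targetPattern) {
        this.pattern = targetPattern;        // e.g. a regular expression
    }
    Transformer.prototype.transform = function (buffer) {
        // Format-transforming encryption would go here: encode the buffer so
        // the bytes on the wire match this.pattern.
        return buffer;                       // placeholder: identity
    };
    Transformer.prototype.restore = function (buffer) {
        // Invert the transformation on the receiving side.
        return buffer;                       // placeholder: identity
    };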

We also want to be able to support third-party plug-ins, since we're not going to be able to support all of the social networks our users may want to use. So we have a generic social API and transport API by which third-party developers can come in and write additional support for other social networks, for other obfuscation protocols, or even, for advanced developers, for other sorts of peer-to-peer networking stacks.
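The surface such a social API might expose to plug-in authors could look something like the following. The names are invented for illustration; the real interface lives in the project's GitHub repositories:

    // A third-party provider implements friend discovery plus a message
    // channel; the core uses the message channel purely for signalling.
    var exampleSocialProvider = {
        login:       function (credentials)       { /* authenticate, return a session */ },
        getFriends:  function ()                  { /* return [{ id, name, online }] */ },
        sendMessage: function (friendId, payload) { /* deliver signalling data */ },
        onMessage:   function (handler)           { /* register incoming-message callback */ }
    };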

So, about a year ago, we announced this at the 2013 Google Ideas Summit. Back then, WebRTC was barely working, so we have made a lot of progress in the past year: we have built out our Chrome and Firefox extensions, staffed up our team, done some security reviews, it's all open source on GitHub, which you are welcome to take a look at, and we have started working on an obfuscated peer-to-peer transport.

Looking ahead: by the end of the year, we plan to publish to the web store to release it to more users. We are currently working on a mobile client, particularly for Android, using Cordova so we can reuse a lot of our existing logic but have a client that works on mobile devices. Something I am excited about, which we have just started working on, is uProxy for Cloud, which essentially allows you to deploy that core logic into a VM in a headless fashion, so you can use that VM to proxy your traffic, or even share it with your friends in the case where you may not have a friend in the desired region you'd like to proxy to. And we have third-party plug-in support.

So there's the website. All of our code is open source; you are welcome to take a look, and feel free to e‑mail us any questions. What I'm particularly excited about in presenting this to the community is that I want to hear from you what you think is most interesting about this project. What should we be working on? What are the ways in which this could be most impactful, and how can our existing software help you achieve the types of things that you do? Ultimately, what I think this represents is a new opportunity to explore networking services that are deployed in the browser, in JavaScript, and uProxy is an example of that, where we are trying to give users the ability to route traffic through a friend and possibly around failures in the network.

I'd be happy to take questions.

(Applause)

CHAIR: Thank you very much and we have got time for questions.

AUDIENCE SPEAKER: Hello. As far as I'm concerned, this type of process is easily compromised, because most people press 'yes' every time they get some confirmation request via social networks, and I think that your proxy network would very quickly be turned into a botnet, and there would be DDoS attacks via your friends' accounts.

RAYMOND CHENG: I think it's fair to say that ‑‑ I mean, proxy servers are not new, right, so anything that we're doing here is not necessarily exacerbating existing attack models beyond what we'd expect. I think the goal here is really to make it easy for users to set them up and use them in a way that augments their experience and lets them find more content on the net. I'm not sure if that answers your question. Perhaps ‑‑

AUDIENCE SPEAKER: I'm not sure, too. I'll see you later.

RAYMOND CHENG: I'd be happy to talk with you off line.

AUDIENCE SPEAKER: Philip from the RIPE NCC. I have a question from James Blessing: What do the content owners think of this new tool for getting around geolocation?

RAYMOND CHENG: Well... ultimately, again, I think proxy servers are not new. If it's about getting access to content, I don't see that there is anything wrong with seeing the Internet from another perspective. Ultimately, it does change aspects of Internet services that rely on geolocation, and to some extent we are thinking hard about how we might do this in a way that doesn't break services that depend on that.

AUDIENCE SPEAKER: Randy Bush. I hate to pick a nit, but way early on you mentioned two million outages a month. What's an outage, a circuit bounce?

RAYMOND CHENG: That's a fair question. Essentially what we did was: we had vantage points, as I mentioned, in four AWS regions. I believe we were choosing five routers in the 15 most highly connected ASes and doing traceroutes to them. We considered it an outage when we weren't able to ping a router for, I believe, over 90 seconds. But I think it did surprise us how frequently these outages occurred.
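The rule as described reduces to a simple gap test over ping-reply timestamps. A sketch; the 90-second threshold is from the answer above, while the data layout is assumed:

    // Given the times (in ms) at which ping replies from one router were
    // seen, report every gap longer than the threshold as an outage.
    var OUTAGE_THRESHOLD_MS = 90 * 1000;

    function findOutages(replyTimesMs) {
        var outages = [];
        for (var i = 1; i < replyTimesMs.length; i++) {
            var gap = replyTimesMs[i] - replyTimesMs[i - 1];
            if (gap > OUTAGE_THRESHOLD_MS) {
                outages.push({ start: replyTimesMs[i - 1], durationMs: gap });
            }
        }
        return outages;
    }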

AUDIENCE SPEAKER: I have a question online about the deployment, or the distribution, of this product. If the main objective is to avoid censorship, then how are you going to avoid censorship of the distribution of the product itself?

RAYMOND CHENG: That's a great question and I think that's a question that every piece of software that is trying to address this has to solve. At the end of the day, we're currently all open source, we're giving the software away for free, it's going to be on the web store, at the same time these sites may be blocked but I think history has shown that, you know, you can be surprised at how good human beings are at distributing software when they need it. So, that's not necessarily a problem we're worried about at the moment, but it's certainly something that we should be thinking about.

AUDIENCE SPEAKER: My name is Mark. I actually run a commercial VPN service. Are you going to spend any time and effort educating your users on exactly what they are opening themselves up to legally by letting people exit through them, saying: by doing this you understand you are opening yourself up to, you know, DMCA requests, other legal requests, etc.?

RAYMOND CHENG: Absolutely. That's a great point and that's something that anybody who operates a proxy server or a VPN has to think about. One of the things we're working on is thorough documentation that's linked with the product itself that tries to explain as much as possible what the risks are and what essentially you are doing when you use the software.

CHAIR: I am closing the mics now, behind Martin.

AUDIENCE SPEAKER: Coming back to your outage detection: have you made sure you didn't just run into an ICMP packet limit when doing the pings and traceroutes?

RAYMOND CHENG: The point that I wanted to make is that outages are common. I don't think anybody here can really deny that. If you are looking for more specifics about how the study was done, I referenced a paper where we describe in more detail how it was done, and I'd be happy to chat with you more about that off line.

AUDIENCE SPEAKER: Martin Levy. If I may, sort of a two-part question. You got asked a question before, and I want to re-ask it, about content. I want to confirm two very specific examples.

If this system exists, can I get the BBC iPlayer, only available within the UK, from some random place, as long as I have a friend in London? That's the first question. And conversely, can I get to Netflix in America if I have a friend in America, when I'm sitting in, again, some random country that doesn't have Netflix? So that's the content question. And if the answer to those is yes, both those players will have a serious problem, as they do with other VPN providers; you are a lot more distributed, but they'll have a problem.

The second question is: Why do users, in the general sense, not those ‑‑

RAYMOND CHENG: Sorry, just so ‑‑ let me go ahead and answer the first one. I want to reference the answer I gave to the gentleman right before you: there are going to be local laws, and the laws in every country are going to be different, and it's important for people to understand what they're opening themselves up to. Now, at the same time, this does not provide anonymity; it's not as if it's going to aid you in any kind of illegal activity. I think, at the end of the day, if you want to do something like that, there are much better tools for doing it. At the same time, when it comes to censorship, a lot of forms of censorship are censoring things that are not actually illegal under the laws of that country. A lot of the time it just happens that blocking an entire domain is the blunt tool by which they try to achieve the ends they're after. So it's a lot more nuanced, and I'd be happy to talk to you about this off line, but I think, at the end of the day, it's both a human, a legal and a technical problem. But why don't you go ahead ‑‑

AUDIENCE SPEAKER: I'll take my second question off line, as time is short, and grab you, because I actually want to argue more about your response. It is a brilliant, brilliant political response, and yet I don't feel it addressed the specific issue. But I will deal with this off line, and I'm sure other people will as well.

CHAIR: We have got time for your second question, Martin, if you want to ask it.

AUDIENCE SPEAKER: You are so going to regret that. If you could go back to one of your first slides, it says users need more control over their paths on the Internet. For a large number of people in this audience, that makes them shudder, because that, in plain English, talks about the physical infrastructure, the way we route; we talk a lot about Internet routing within this community. First of all, I think you want a different way of saying it; but second, I think you need to define who those users are. If, for example, you are dealing with a user the way the Tor network does, inside a country with heavy censorship or limited access, that's different. But if you're talking about general users, and your example was getting parents and grandparents to install this, then I think that worries me, and it may worry a lot more people inside this room than just myself.

RAYMOND CHENG: I would love to talk to you about what concerns you. At the end of the day, I think there are perfectly legitimate reasons, as I was trying to motivate in the slides subsequent to that one, why users may want the option to do some kind of detour routing, and in a lot of cases that's very helpful. It may not be useful for everybody, and it may not be something that we all want. Clearly this is an overlay network; I'm not talking about anything lower down in the stack. But again, if you're worried, I would be happy to chat with you after this. Thank you.

CHAIR: Thanks very much, Raymond. That was fantastic.

(Applause)

Okay. That brings us to the close of our Plenary content for today, and tomorrow it will be Working Groups, of course. We're not quite done for today yet. Three lightning talk slots are still open for Friday; the PC hasn't picked those three lightning talks yet. So, please submit lightning talks if you have got something interesting to talk about, and take advantage of the slots that have become available.

There is a social event tonight, generously sponsored by those lovely folks at LINX, that's in the west end at Jewel Piccadilly. The buses will leave from 18:30 and run every 15 minutes till 20:30 to take you to the venue. That's from the bus lobby downstairs, and there will also be buses returning back from the venue late in the evening.

If you are feeling more adventurous and you have got an Oyster card, you can take the Tube to Piccadilly Circus.

We have got two BoFs this evening. In this room, we have got the RACI BoF, which is to bring interesting people from the academic community to talk about their projects here. There are two talks: one of them is about using flow-based detection for SSH, and the other is about IXPs and whether depeering is the right thing to do. So, that's one option in here.

The other option is aimed at people that either operate network operators forums, or maybe want to start a network operators forum in the region they are located in; that's going on in the side room, and unfortunately I am moderating that. Otherwise, have a great evening, and we'll see you all here tomorrow. Have a pleasant and safe evening. Thanks.

LIVE CAPTIONING BY MARY McKEON RMR, CRR, CBC

DOYLE COURT REPORTERS LTD, DUBLIN, IRELAND.

WWW.DCR.IE