Who’s knocking? Profiling recursive resolvers on authoritative name servers
The role and importance of recursive resolvers in the DNS (Domain Name System) are often overlooked. Also known as DNS recursors, they act as middlemen between clients and DNS name servers, asking around the internet in search of the answers to client queries. Their caching properties are particularly useful for speeding up such searches.
Resolvers can serve different client bases, ranging from end users who want to visit their favourite video streaming websites to scripts that crawl the internet for marketing reasons. Knowing which resolvers serve their main clients would allow operators to understand how to set up their server infrastructure to optimise interaction with those resolvers. It would also be useful if we could combine our classification data with other signals, such as RFC 8145 trust anchor telemetry, to identify important resolvers that don't have the correct DNSSEC trust anchors configured; the prevalence of such resolvers was problematic in the run-up to the last root KSK rollover.

Like our colleagues at .nz, we at SIDN Labs are working on a project that involves classifying recursive resolvers to increase our understanding of such issues. The main difference between our project and the .nz project is that we are trying to go beyond differentiating 'real' recursive resolvers from other resolvers, that is, those used only by monitoring tools. Our intention is to label different kinds of resolver, such as those operated by cloud providers, ISPs and so on. We are halfway through our research, but wanted to share some early results and collect feedback from the community with this blog post.
Dataset and feature selection
We want to classify recursive resolvers based on query data collected on the .nl name servers. In principle, however, data collected on any large authoritative name server should be adequate. We're collecting queries and responses from two of our four name servers and storing them in our Hadoop-based warehouse.

Feature selection is highly important, because resolvers follow different patterns when querying .nl domain names. For example, while 82 per cent of the queries sent by 20 per cent of resolvers relate to IPv4 (A) or IPv6 (AAAA) addresses, some resolvers query almost exclusively for NS records. We have used such differences to select 22 features that we think are beneficial for analysis. Some features, such as the share of A-record queries, are straightforward; others need some pre-processing. For example, one of our assumptions is that ISPs' resolvers are well maintained and adopt new standards promptly, so we also try to detect whether resolvers support the privacy-enhancing standard QNAME minimisation.

We have collected data on 22 distinctive features of nearly 1.4 million unique resolvers over the course of a single day. We have also created a ground truth dataset consisting of Autonomous System Numbers (ASNs) for known ISPs, hosting companies, cloud providers and so on. Creating this ground truth allows us to measure the accuracy of the clustering algorithms.
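To give a feel for what such feature extraction might look like, here is a minimal sketch that computes a few per-resolver features from query tuples. The sample data, feature names and the label-count heuristic for QNAME minimisation are illustrative assumptions, not our actual pipeline or feature set.

```python
from collections import Counter, defaultdict

# Hypothetical sample of (resolver_ip, qname, qtype) tuples, as might be
# extracted from authoritative query logs; values are invented for illustration.
queries = [
    ("192.0.2.10", "www.example.nl.", "A"),
    ("192.0.2.10", "example.nl.", "AAAA"),
    ("192.0.2.10", "mail.example.nl.", "A"),
    ("198.51.100.7", "example.nl.", "NS"),
    ("198.51.100.7", "another.nl.", "NS"),
]

def feature_vector(rows):
    """Compute a few per-resolver features: the share of address (A/AAAA)
    queries, the share of NS queries, and the mean number of qname labels
    (consistently short qnames can hint at QNAME minimisation)."""
    per_resolver = defaultdict(list)
    for ip, qname, qtype in rows:
        per_resolver[ip].append((qname, qtype))

    features = {}
    for ip, qs in per_resolver.items():
        types = Counter(qtype for _, qtype in qs)
        total = len(qs)
        labels = [len(q.rstrip(".").split(".")) for q, _ in qs]
        features[ip] = {
            "share_address": (types["A"] + types["AAAA"]) / total,
            "share_ns": types["NS"] / total,
            "mean_labels": sum(labels) / total,
        }
    return features

fv = feature_vector(queries)
print(fv["198.51.100.7"]["share_ns"])  # → 1.0 (this resolver only asks for NS)
```

In a real pipeline the same aggregation would run over a day of Hadoop-stored query data rather than an in-memory list, but the per-resolver grouping and ratio features are the core idea.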
Figure 1: Share of queries asking for the A records of domain names.

Figure 1 shows how resolvers have distinctive query features. We can see that the IP addresses of TransIP, which is in the 'hosting companies' set within our ground truth, show the most unique behaviour, not only in relation to query type A, but also in relation to numerous other features in the dataset. Such distinctive behaviour on the part of hosting companies allowed us to identify them with high accuracy in the clustering phase. We ran various clustering algorithms, resulting in the successful identification of numerous hosting companies. However, we were unable to clearly distinguish other types of resolver. We think that, in some cases, clustering might be affected by noise in the dataset; for example, not every IP in an ISP's AS is necessarily a recursive resolver serving end users.
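The clustering-and-scoring step can be sketched as follows. This toy example uses a deterministic two-cluster k-means on two of the features described above and scores the result against ground-truth labels by majority vote per cluster; the feature values and labels are invented, and in practice we use off-the-shelf clustering algorithms over all 22 features.

```python
def euclid2(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def two_means(points, iters=20):
    """Tiny 2-means: initialise with the first point and the point
    farthest from it, then alternate assignment and centroid updates."""
    c0 = points[0]
    c1 = max(points, key=lambda p: euclid2(p, c0))
    centroids = [list(c0), list(c1)]
    assign = []
    for _ in range(iters):
        assign = [0 if euclid2(p, centroids[0]) <= euclid2(p, centroids[1]) else 1
                  for p in points]
        for c in (0, 1):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Hypothetical per-resolver features: (share of A/AAAA queries, share of NS
# queries). Values and ground-truth labels are illustrative only.
features = [(1.0, 0.0), (0.9, 0.05), (0.95, 0.0),   # ISP-like resolvers
            (0.1, 0.9), (0.0, 1.0), (0.05, 0.85)]   # hosting-like resolvers
truth = ["isp", "isp", "isp", "hosting", "hosting", "hosting"]

assign = two_means(features)

def accuracy(assign, truth):
    """Score a clustering against ground truth: each cluster is credited
    with its majority label, mimicking how we check cluster purity."""
    correct = 0
    for c in set(assign):
        labels = [t for a, t in zip(assign, truth) if a == c]
        correct += max(labels.count(l) for l in set(labels))
    return correct / len(truth)

print(accuracy(assign, truth))  # → 1.0 on this cleanly separable toy data
```

The majority-vote scoring also illustrates why noise hurts: if some IPs in an ISP's AS are not really end-user resolvers, their mislabelled feature vectors drag down the purity of otherwise clean clusters.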
Evaluating more features
No exact method has yet been defined for classifying recursive resolvers. However, our research observations to date enable us to conclude with some confidence that it is possible to classify recursive resolvers. Our next steps are to evaluate more features with a view to improving the classifier. Feedback and/or ideas on how we should go about that are welcome.
This blog post first appeared on apnic.net.