As you are likely aware, the number of generic top-level domains (gTLDs) is about to increase dramatically. In June, the Internet Corporation for Assigned Names and Numbers (ICANN) announced that 1,930 applications were filed for new gTLDs (and although six applications were recently withdrawn, that leaves 1,924 applications in play). Domains that might go live in the months and years ahead include .CLOUD, .BUY, .BOOK and .APP; the .APP string alone drew 13 separate applications to be delegated as a gTLD.
This means that the Internet is now the world’s biggest boomtown and, as you might suspect, a period of adjustment is inevitable when a small town of a few hundred becomes home to thousands -- in this case, thousands of new gTLDs.
As a quick refresher: root name servers are part of the Domain Name System (DNS), the worldwide, distributed database that is used to translate domain names such as www.securityweek.com into other identifiers, such as IP addresses. The root name servers publish the root zone file to other DNS servers and clients on the Internet. The root zone file describes where the servers for TLDs are located.
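To make that concrete, here is a minimal sketch of how the root zone describes where a TLD's servers are located. The records below are illustrative, simplified samples in zone-file style, not a real excerpt of the root zone, and the parsing helper is purely a teaching aid:

```python
# Illustrative, simplified root-zone-style records (not real root zone data).
# Each line: owner name, TTL, class, record type, record data.
SAMPLE_ROOT_ZONE = """\
com.                172800  IN  NS    a.gtld-servers.net.
com.                172800  IN  NS    b.gtld-servers.net.
a.gtld-servers.net. 172800  IN  A     192.5.6.30
a.gtld-servers.net. 172800  IN  AAAA  2001:503:a83e::2:30
"""

def tld_name_servers(zone_text, tld):
    """Return the name servers the root zone delegates a TLD to."""
    servers = []
    for line in zone_text.splitlines():
        fields = line.split()
        # Keep only NS records whose owner name is the TLD itself.
        if len(fields) == 5 and fields[0] == tld + "." and fields[3] == "NS":
            servers.append(fields[4])
    return servers

print(tld_name_servers(SAMPLE_ROOT_ZONE, "com"))
# ['a.gtld-servers.net.', 'b.gtld-servers.net.']
```

The A and AAAA lines are "glue" records: the addresses a resolver needs in order to reach the delegated name servers in the first place.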
ICANN’s report was published in response to a request from ICANN’s Governmental Advisory Committee for a comprehensive analysis of the issue, including all of the underlying data on root zone scalability. As an active member of ICANN and a contributor to the report, I’m pleased to say that the report draws a happy conclusion: The root zone can grow in a stable manner -- the boomtown that is the Internet will be able to handle the hundreds upon hundreds of new gTLDs that are soon coming.
The report draws on a number of factors to conclude that the introduction of new gTLDs will not compromise the operation of the root zone. Those factors include a survey of root zone operators, the fact that the performance of the root zone is mainly predicated on the number of queries rather than the number of actual records (gTLDs) within the root zone, and previous studies that have reached the same conclusion.
Even though the report is now published, it will be amended in response to ongoing questions and requests for clarification to ensure long-term objectivity. To that end, the report is being treated as a “living document” that will continue to evolve with close collaboration among members of the ICANN board, community and staff.
One important caveat: the aspect of root zone evolution to focus on over the next year isn’t the number of new gTLDs but, instead, the rate at which they are added.
The rate of change is significant because the root zone comprises a set of resource records for each TLD. Over time, the root zone has proven its ability to accommodate the introduction of numerous new developments, including the first two rounds of new gTLDs, the introduction of IPv6 glue records and the deployment of DNSSEC within the root zone. At the same time, there is a tendency for the number of name servers per delegated TLD to increase as the TLD name server infrastructure matures.
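A rough sketch shows why each of those developments, and each additional name server, multiplies the records a single TLD contributes. The counting rule below is an assumption for illustration only (it pretends every name server needs both IPv4 and IPv6 glue, whereas in practice glue is only published for in-bailiwick servers):

```python
def root_zone_records_per_tld(num_name_servers, ipv6_glue=True, ds_records=1):
    """Rough, illustrative count of root zone records for one TLD delegation.

    Assumption for illustration: every name server gets one NS record,
    one A (IPv4) glue record and, optionally, one AAAA (IPv6) glue
    record, plus the TLD's DNSSEC DS records.
    """
    ns = num_name_servers                               # NS delegations
    a_glue = num_name_servers                           # IPv4 glue
    aaaa_glue = num_name_servers if ipv6_glue else 0    # IPv6 glue
    return ns + a_glue + aaaa_glue + ds_records

# As a TLD's server set matures from 2 to 6 name servers, its record
# footprint in the root zone nearly triples under these assumptions.
print(root_zone_records_per_tld(2))  # 7
print(root_zone_records_per_tld(6))  # 19
```

The point is not the exact numbers but the direction: IPv6 glue, DNSSEC and growing server sets all push the per-TLD record count, and therefore the volume of change requests, upward over time.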
Another, perhaps more important, factor relating to the rate of change is the number of changes that need to be made to a TLD’s set of resource records over time. The overall root zone publication system today is staffed and tuned to support an accepted service level. As the number of TLDs increases, so will the maintenance requests for changes to a TLD’s resource records. The root publication system should be audited and monitored to confirm that its resources can support an increase without degradation in the current service level. The ICANN Security and Stability Advisory Committee (SSAC), of which I am a member, has made this observation twice: once in a letter to the ICANN Board on July 2, 2012, and once in the SSAC report, SAC 046 – Report of the Security and Stability Advisory Committee on Root Scaling.
While ICANN has imposed a growth limit of 1,000 new gTLD delegations per year, the focus should not be on the maximum number of TLDs that are added. The focus needs to be on the frequency at which the new gTLDs are added to the root zone. It simply is not feasible to add 1,000 new gTLDs all at once. The rate of introduction of these 1,000 new gTLDs, and the processes and systems that enable a smooth introduction, are what require serious effort.
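Some back-of-the-envelope arithmetic makes the pacing question concrete. The 1,000-per-year cap is ICANN’s; the weekly and business-day breakdowns below are my own illustrative assumptions, not ICANN policy:

```python
# ICANN's stated cap: at most 1,000 new gTLD delegations per year.
ANNUAL_CAP = 1000

# Illustrative pacing assumptions (not ICANN policy): spread the cap
# evenly over 52 weeks, or over ~250 business days.
per_week = ANNUAL_CAP / 52
per_business_day = ANNUAL_CAP / 250

print(f"~{per_week:.0f} delegations per week")       # ~19
print(f"~{per_business_day:.0f} per business day")   # ~4
```

Even at the cap, that is a steady drumbeat of roughly 19 delegations a week, every week, each one a change request flowing through the same root zone publication system described above -- which is exactly why the rate, not the total, deserves the scrutiny.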