November 11, 2014

Spectrum Access and the Public Sphere

In this time of peace, plenty, and the rule of law, you are less likely to be inconvenienced by barbarian horse archers than by a planned bypass road that the local city council has not told you about. Or perhaps your country is secretly trading away food subsidies for increased export quotas. Though we are equal before the law, inconsistencies in access to it remain. The ability to hear and make oneself heard is essential to finding common ground, and common ground is a necessary prerequisite for action in a democracy.

The preeminent position of information depends upon the possibility of acting on it. This was, however, not always the case. Of the rights we enjoy, different ones can be credited to different things to different degrees, depending on where you are: the Magna Carta, the Dutch Revolt or perhaps Gandhi’s Salt March. But wherever that may be, the fact that these rights are maintained today is largely due to the moderating influence of Habermas’ bourgeois public sphere: the network of friends, neighbours and colleagues, and the larger network of the county, city, state or country that share a connection. The existence of these networks in their modern form is immoderately due to certain special resources: ideas, material, and the one this article is about, access to which is to a greater or lesser extent controlled by one of the most powerful entities in history – the modern nation state. Though the dispersal of power originating from the principle of ‘one man, one vote’ is the most essential ingredient of democracy, power tends to resist dissolution. One way it does so is directly, through the control of information; another is through the control of the resources required for the exchange of information.

Access to resources has long been used to distinguish between stages of human advancement. Armchair historians colloquially speak of stone, bronze and iron because their respective wielders were the strongest of their time, progressively separating us from the other apes and giving us mastery over the wild. Soon our technology made us the only apex predators worth fearing, and fear made war a way of life for most of recorded history, because no amount of industrial production could substitute for mutual understanding.

The ability to hit things with other things is, thankfully, not our only distinction. Human curiosity and the desire to explore are natural counters to distrust, the springboard of conflict. But these, like all positive urges, can be manipulated, and so the relationship between information and power has seldom been innocent, ranging from World War II propaganda against the ‘subhuman’ enemy, to the manufacturing of consent through modern broadcast media to legitimise wars of aggression, to the samizdat of peer-to-peer communication that undermined and continues to undermine those efforts on both sides, temporally speaking, of the fall of the Berlin Wall. The latter stands in contrast to broadcast media which, even if nominally independent, was expensive and had to be centrally operated, making it manipulable through bribery or coercion by entities with the requisite money or power. The promise of new media is that it is harder to attack at a single point. Though this feature is to some extent negated by walled gardens like Facebook, it has nonetheless left an imprint on the dawn of the 21st century. This time, however, it is more disruptive than destructive; it is not about revolution from the barrel but an evolution of the hegemonic nature of the public sphere through conversation and shared experiences. This, more than any before it, is the age of ideas, when the average plebeian has access to a far greater range of ideas than her counterpart from an age, or a century, or even fifty years ago.

Credit must go to three things. First, the vanishing transaction cost of sharing on modern communication networks makes broadcasting virtually free. Second, inexpensive hardware: the ubiquity, resilience and dirt-cheapness of mobile phones have significantly increased the power – to organise and negotiate – of people across a large spectrum of incomes around the world. The third and final reason could be held responsible for the fulfilment of the first two: the shift of government from ‘planner and controller’ to ‘regulator and facilitator’ of the resources required to communicate, progressively stepping back from telling people what to say, whom to say it to and what to say it with – in other words, stopping state censorship, refusing to impose information transit taxes, and easing the manufacture of hardware by allowing optimal access to intellectual, material and human resources as well as a big enough slice of the radio spectrum.

Old Habits, New Concerns

Back when pamphlets were the primary means of propagation, all that was needed to participate in any regulatory process was a certain knowledge of human nature and the ability to use a quill. Though the former has not changed, the technology used to express it has changed spectacularly, and is now all but essential for those processes to work efficiently. Understanding and regulating this technology now requires multiple lifetimes spent studying all its facets. Knowing what is possible within the technical constraints of even something as specific as wireless internet or the client/server model requires mental gymnastics the likes of which the Enlightenment’s scholars never imagined. The flavour of this change is elegantly expressed by Lawrence Lessig’s Code is Law, where he talks about the encapsulation of law in the code that runs, among other things, the internet.[1] Our anonymity (or lack thereof), for instance, depends upon that code, and once that code has become part of a standard, we are stuck with our choices. We will look at something similar: how we code our access to a public resource – the radio spectrum – into regulation, and how that could influence the way we communicate and, in doing so, our relation to corporate power and the state. Some, who at an earlier age might have shaped this relation, have of late become suspicious of things they do not understand.

The realpolitik of resources and control has habitually shaped – and tainted – the discourse about the extent of power that a sovereign state should be able to exercise with respect to its citizens. So although expanding and enhancing the public sphere everywhere is of obvious interest to every liberal citizen, the route to doing so is a delicate one.

The solution lies in the very broadening of the public sphere that technology provides. There has been a clear line of progression from Radio Free Europe to CNN’s embedded journalism through the Western coverage of the Syrian crisis and the non-coincidental rise of Russia Today, the Occupy movement and hashtag revolutions: the genie has slipped the grip of the State – any state. This has only been possible because, unlike the limited choice available on traditional terrestrial broadcast, distributed networks give people true choice – opposed by things like paywalls, copyright and proprietary standards. They have also been instrumental in breaching the wall that separates ‘us’ and ‘them’ for those with access – a sort of reverse ghettoization. In India, local FM radio stations like the one run by SECMOL in the northern state of Ladakh have allowed young people in poor, underserviced areas to write their own experiences into the national public discourse. Radio has also helped them identify broader local trends from anecdotal experience (“so you don’t have teachers in your school either?”), enabling them to police their own local government.

Elites and Elitism

Let us look at two organisations working for the enlargement of the public sphere in India but targeting completely different demographics. Not long ago, the Bangalore-based NGO Janaagraha launched the app IChangeMyCity, whose purpose is to let citizens give the city corporation – the body responsible for creating and maintaining infrastructure – feedback on whether the contractors it employs actually complete the repairs they are paid for. Some might accuse Janaagraha of elitism, because the app only runs on smartphones and needs a data plan; however, not long after it was released, smartphones assembled in India costing under $100 began to appear on the market. This still excludes everyone who does not have a smartphone. It turns out that those devices’ GPS chip is key. Though it is possible in principle for users to simply call in and send their coordinates, triangulated using their cellular operator’s nearby towers, cellular operators do not give away that data freely, and buying it would greatly increase the cost of the scheme. Because location data sits outside the public sphere, poorer users are excluded from this particular method of governance.

CGNet Swara[2] facilitates citizen journalism by people living in central India who otherwise have no way of being noticed by the media. Anyone can call in and narrate their experiences, which are then transcribed and transferred onto CGNet’s website – and thereby into the public sphere as it exists in the cities. What does this say about elitism? The benefit is twofold: first, city-dwelling elites can understand the point of view of the people of the rural heartland, beneath whose feet lies the coal that fuels the power plants powering their internet connections. And second, the elites (by their position and numbers) can influence state policy; they are the public sphere.

It is gratifying for non-anarchists to see the common desires expressed in both portals intersect with concerns about government not delivering on its promises about infrastructure.[3] This is all the more true when it comes to radio technology and the use of spectrum. That we can see this at all is a small if encouraging beginning, made possible by current infrastructure; we will look at how that infrastructure has come about – and the ways in which it can advance – in the second part of this article.

The Changing Limits of Technology

Crystal radios are exceedingly simple devices: essentially, they consist of an antenna, an apparatus to tune the resonant frequency, and a simple detector. DIY kits used to be commonplace, and it is one of the tragedies of the digital age that this early exposure to the innards of basic telecommunication no longer happens; you cannot take apart the digital circuitry of a GSM chip[4] the way you can a crystal radio.

But that is just the magic of progress: what used to require a power-hungry, room-filling computing beast in the 1950s now fits on a single wafer of silicon. The system on a chip (SoC) of the present day sits at the tip of a gigantic supply chain, whose roots range from crude oil (used in the manufacture of plastic) to the microscopic but essential quantities of dopants added to the semiconductor that makes up the chip itself. Improvements in capacity due to the increase in the average device’s frequency resolution (so that a smaller sliver of spectrum is reserved at any time or place for someone’s call or video download) and improved codecs have allowed us to do more than keep making calls – we can now watch high-definition video on the move. But the cost of each incremental improvement in spectral efficiency keeps rising, so, just as CPU manufacturers went from making faster cores to putting more of them on a chip, telecom providers are beginning the move from kilometre-wide tower ranges to micro-, pico- and femtocells – to put it simply, smaller coverage areas. We will try to gauge the impact of that evolution on the public sphere. We will also look at the impact of the convergence of the Plain Old Telephone Service (POTS) and the internet.

Radiated strength falls as the inverse square of the distance. In other words, if you are twice as far away from a broadcasting antenna, the signal strength seen by your phone is cut by a factor of four. This means that there is a maximum distance, a certain radius, beyond which your phone will not reliably pick up signals from that tower. Ideally we should be able to tile the city with circles (each centred around an antenna) so that you are always within that maximum operable radius of some tower, but since we cannot tile a plane with circles we use the next best shape – a hexagon, the kind of pattern you would see in a honeycomb.
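To make the factor-of-four claim concrete, here is a minimal sketch, assuming an idealised isotropic transmitter in free space (the power and distance values are made up):

```python
import math

def flux_density(p_tx_watts: float, distance_m: float) -> float:
    """Power per unit area (W/m^2) at a given distance from an idealised
    isotropic transmitter: the radiated power spreads over a sphere."""
    return p_tx_watts / (4 * math.pi * distance_m ** 2)

# Doubling the distance cuts the signal strength by a factor of four:
print(flux_density(10, 100) / flux_density(10, 200))  # 4.0
```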

Back when you had to reserve a frequency band to which you ‘tuned’ your cellular radio for the duration of your call, the coordinator of this tuning was the cell tower.[5] Coordination between neighbouring towers being too complex for the technology of the time, each tower was given its individual fiefdom – a chunk of spectrum that no neighbour could use – from which it could parcel out slivers to consumers for each call. How much spectrum does this need? The answer is related to the problem of colouring the hexagons in a honeycomb using as few colours as possible while ensuring adjacent ones never share a colour. Three colours suffice if only immediate neighbours must differ, but in practice interference reaches beyond adjacent cells, so classic cellular plans used reuse factors of four or more. Taking four: if each hexagonal cell has 100 concurrent callers, each needing a 100 kHz channel, a cell needs 10 MHz, and a total of 40 MHz is enough to tile the whole plane with cells.
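The budget above is just multiplication; a sketch with the same round, illustrative numbers (none of them from a real operator’s plan):

```python
# Back-of-the-envelope spectrum budget for the honeycomb plan above.
# All numbers are illustrative, not from any real deployment.
CALLERS_PER_CELL = 100
CHANNEL_KHZ = 100        # bandwidth reserved per concurrent call
REUSE_FACTOR = 4         # 'colours' in the honeycomb colouring

per_cell_mhz = CALLERS_PER_CELL * CHANNEL_KHZ / 1000
total_mhz = per_cell_mhz * REUSE_FACTOR
print(per_cell_mhz, total_mhz)  # 10.0 MHz per cell, 40.0 MHz in total
```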

Multiple access technologies like Code Division Multiple Access (CDMA) do not need a separate frequency band for each caller; instead, they allocate each user a unique pseudorandom code, so that any particular user sees every other user’s broadcast as noise rather than as structured interference. While CDMA eliminates many of the problems of GSM (including reducing the burden of coordinating and planning cell layouts), its spectral efficiency is still finite, and thus each honeycomb cell has an upper limit on the number of concurrent users. That was not a problem in the early days. When the first elusive mobile phone owners began to appear in city suburbs, antenna cost did not scale (down) with antenna power, and ownership was sparse – so getting the most powerful antennas to cover the largest area by putting them up on large, expensive towers was the logical choice. As mobile phone costs came down, the growth in the number of users per square kilometre used up the available spectrum. Though this was somewhat compensated for by the equipment’s increasing spectral efficiency, it still was not enough. Then, coinciding with the coming of the Internet, new services appeared which now seem so essential that it is hard to imagine how we did without them – for navigation, communication, weather, news and websites dedicated to pictures of kittens. All of these involve the transfer of data, which requires spectral bandwidth. As consumer demand approaches the capacity of old networks, it becomes financially viable for telephone companies to replace old network equipment, and old ideas, with new ones.
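A toy illustration of the spreading idea – direct-sequence spreading with random codes, far simpler than the real IS-95 or WCDMA air interfaces: two users transmit on top of each other, yet correlating with one user’s code recovers that user’s bits while the other’s signal averages out as noise.

```python
import numpy as np

rng = np.random.default_rng(42)
CHIPS = 128  # spreading factor: chips transmitted per data bit

# Each user gets a pseudorandom +/-1 spreading code.
code_a = rng.choice([-1, 1], size=CHIPS)
code_b = rng.choice([-1, 1], size=CHIPS)

bits_a = np.array([1, -1, 1, 1])    # user A's data, as +/-1 symbols
bits_b = np.array([-1, -1, 1, -1])  # user B's data

# Spread each bit into CHIPS chips; both signals share the same band.
on_air = (np.concatenate([b * code_a for b in bits_a])
          + np.concatenate([b * code_b for b in bits_b]))

# Despread with A's code: correlate each chip block against code_a.
# B's contribution has low cross-correlation and averages out as noise.
blocks = on_air.reshape(-1, CHIPS)
print(np.sign(blocks @ code_a))  # [ 1. -1.  1.  1.] -- A's bits recovered
```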

Two radios sitting next to each other and transmitting at the same frequency would give any listener an unfortunate experience. The same cannot be said of two adjacent telephone lines, because each line’s electronic signal is contained within it, as is the light passing through the fibre optic cables snaking from telephone exchanges to cellular towers around a city. Spectrum over the air is scarce, whereas bandwidth with more cables is not; and as our discussion about hexagons in honeycombs illustrates, there are no a priori constraints on the size of a cell. This means that the bandwidth available in a given area is limited only by the number of cells covering it, so more, smaller cells are the logical way to go. Wireless networks have thus been forced to scale down. Does this imply a shift in the balance of control and ownership between individuals and corporations?
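The scaling argument in numbers – a minimal sketch assuming a fixed, made-up allotment of 10 MHz usable per cell, so that aggregate bandwidth over a city grows simply with the number of cells:

```python
import math

CITY_AREA_KM2 = 100.0   # assumed city size
PER_CELL_MHZ = 10.0     # assumed spectrum usable inside each cell

for radius_km in (5.0, 1.0, 0.1):  # macro-, micro- and femto-ish cells
    # Area of a hexagonal cell with circumradius r: (3*sqrt(3)/2) * r^2
    cell_area = 1.5 * math.sqrt(3) * radius_km ** 2
    n_cells = CITY_AREA_KM2 / cell_area
    print(f"r = {radius_km:3.1f} km -> ~{n_cells:7.0f} cells, "
          f"~{n_cells * PER_CELL_MHZ:8.0f} MHz aggregate over the city")
```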

Scale and Centralisation

The global communication infrastructure that you are using to read this requires a nervous system of fibre optic conduits, whose multi-fibre spinal cord stretches across the Pacific and Atlantic oceans. Its manufacture, deployment and maintenance required massive investment from transnational consortia. Efficient topologies – both of the fibre-optic net backbone and of wireless data services – also need a relatively centralised client/server command structure. As a final nail in the coffin of individual control, the technologically imposed scarcity of spectral resources compels the state to intervene directly in the allocation of the resource itself, to the point where the state, purely from the requirement of coordination, effectively monopolises spectrum for telecommunications corporations.

This is the 21st-century equivalent of outlawing individual control of the printing press, although one has to add an imaginary constraint which compels the state to do so – as if books from two separate presses kept on the same shelf garbled one another’s words. Minus the fanciful talk, the analogy is apt because people are much more likely to communicate electronically than via printed pages; the state is now in control of the dominant medium of communication. But all that changed when the state allowed consumers to broadcast in certain wavelengths without licences: this gave the corporations that manufacture the hardware a market incentive to develop new technologies with which to broadcast in a way that did not require centralised coordination, and among others, those technologies were pooled into a standard that became WiFi (or WLAN in Europe), which uses frequency bands centred around 2.4 and 5 GHz. In the meantime, these interference mitigation technologies have had time to mature for use in primary cellular services and have become mainstream, so any smartphone that can make a phone call is also capable of connecting to a wireless router using WiFi. Its success has been beyond traditional telecommunications operators’ wildest nightmares: by some estimates, two thirds of all smartphone data is sent using this messy, out-of-control technology through personally owned wireless broadcast devices (i.e. WiFi routers).[6] Clearly, there is a shift in the wind when it comes to the regulatory-industrial complex that has been controlling global airwaves, and we have yet to see which old assumptions it will blow away.

Though radiation at 2.4 GHz is attenuated much faster than the longer wavelengths, like the 700 to 1,800 MHz typically used by cellular operators, this is not a problem for smaller cells if the broadcast antennas (read: wireless routers) are inside people’s homes in the first place. In other words, we do not have to stop at personal use: citywide sharing of wireless networks is closer to being a reality. Wireless routers are not always distributed around a city in a perfect honeycomb pattern, however, and there are issues of trust and interference, because WiFi was primarily designed with a single set of users in mind. Plain vanilla WiFi also does not address issues such as handover (the real-time switch from one tower to another during the course of a phone call in a moving car). But enterprising startups such as Republic Wireless have arisen to fill the gap and write the controllers that have allowed WiFi to become part of the cellular ecosystem in the past two years; adoption of their hybrid network, consisting of available WiFi hotspots coupled with a commercial operator’s network, is growing rapidly in the USA.
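How much faster is the attenuation mentioned above? In free space, path loss grows with frequency; a minimal sketch using the idealised free-space formula, with 900 MHz assumed as a representative cellular band (real walls and clutter widen the gap further):

```python
import math

C = 3e8  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 50.0  # metres between antenna and phone (assumed)
extra = fspl_db(d, 2.4e9) - fspl_db(d, 900e6)
print(f"{extra:.1f} dB more loss at 2.4 GHz")  # ~8.5 dB, roughly 7x weaker
```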

The rise of Republic Wireless would not have been possible without the explosion of WiFi hotspots in recent years, which has been due, in large part, to another technological advance mentioned earlier: optical fibre. These fine threads of glass use light instead of electricity and, with the advent of dense wavelength division multiplexing (basically, an increase in optical resolution), are capable of a far higher data throughput than copper, as well as being much cheaper bit for bit. Putting new optical infrastructure in place is expensive, so the minimum amount a company would have to charge consumers to recoup costs is higher than for copper; but the marginal cost of increasing their bandwidth is far lower, so new data plans start out with much faster speeds than the previous, copper-based generation. If putting another wireless router in your house is seen to be more convenient than laying an ethernet cable to your neighbour’s place to access his excess bandwidth, then what you end up with is a lot of cheap, usable wireless bandwidth just lying about – bandwidth that could be used.

Manufacturers of network hardware decide what the networks of the future will look like. They design the antennas that go onto the towers as well as into your mobile phone. According to marketing material available online, next generation networks will involve two things: long-wavelength transmission from the old towers, which provides comprehensive but low-capacity coverage, and some sort of wireless router analogue with better etiquette, able to operate coherently with others of its kind, broadcasting at shorter wavelengths into smaller cells – devices that operators will try to induce users to place on their property and run off the excess (or at the very least, cheap) bandwidth available to their optical fibre-based internet. Depending on how it is implemented, this could either be an enlightened step toward the future or a valiant attempt to keep telecom providers’ customers on a leash. The precedent is obvious; hardware lock-in is not new, and in some jurisdictions it has been illegal to ‘jailbreak’ your mobile phone for a while.

Another monopoly enforced by the state that has an important effect on the use of spectrum is the monopoly on ideas – the international patent regime. Even assuming two people could not possibly hit upon the same idea independently, one side effect of technological ideas initially being deployed for the highest-paying clientele is that the corporations that implement them first may consider the return on investment in serving less wealthy clients too small – and the latter could be entirely deprived of those new and better technologies, if the law allows the former to hold a monopoly on their manufacture. In our context, this could result in individuals being denied access to the improved WiFi analogue that telecom hardware manufacturers claim to have created.

Aside from disallowing control of property (by banning ‘jailbreaking’) and of ideas (in the form of patents), the state can and has blocked its citizens from reaping the benefits of putting multiple services together in an intelligent way. As we have seen, the ability to integrate WiFi into traditional cellular networks – for instance by enabling soft handover to WiFi – has been around for a while, but the loss telcos would face if their calls no longer went through their networks but instead through those of the internet service providers has dissuaded them from adopting it. More than being unhelpful, it is possible that they have actively lobbied to impede user adoption of new paradigms: most glaring in India is the outlawing of calls starting from a computer and terminating at a mobile phone, with the gateway (the link between the Internet and POTS) inside our borders. Suppose you, in Istanbul, use a VoIP tool like Skype or Viber to call someone sitting at her computer in Mumbai – that is fine. But suppose she has hooked her computer up to her brother’s mobile phone, so that whenever someone calls her computer over the internet, a script running on it forwards the call to the phone – essentially transferring the data onto the POTS network in the form of a plain old phone call. Then she could be arrested, because this scenario has been outlawed – in effect, the state has intervened to massively increase the cost of a service; the owners of the outmoded technology, which you are instead forced to use, are the only beneficiaries. But smartphones use wireless data, and broadband internet penetration is growing exponentially in the cities. So it is only a matter of time before WiFi ownership reaches a critical mass and then, when enough people wake up, popular desire for a Republic Wireless could pop up overnight.

What if you could do without the POTS network, commercial internet, and even the centralised administration of client/server architecture? New and improved decentralised mesh routing protocols like BATMAN, and implementations like ROBIN, allow networks to function without external, centralised coordination. Combined with directional WiFi antennas that can be used for two-way, point-to-point WiFi links – effectively backhaul, the last ingredient required for a fully functioning network (commercial wireline connectivity performed that task in the case of Republic Wireless above) – they have allowed the founders of guifi.net to create a spectrally efficient wireless network across tens of kilometres of the Spanish countryside, completely outside the ambit of corporate service providers of any sort. Fittingly, in the spirit of the GNU General Public Licence (GPL), they also created the Wireless Commons Licence, a framework for cooperation between users of a shared network. Though guifi.net is internally entirely separate from commercial connectivity, it can operate in tandem with it, and has to connect to an internet exchange point to access the internet. But everyone in the mesh can communicate within it without paying any operator, and users join organically. The growth of guifi.net can be seen as proof of the concept that users can share wireless devices to constitute an actual communications network.
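A toy flavour of the idea behind such protocols – far simpler than BATMAN itself, and with made-up reception counts: every node periodically floods small ‘originator’ hellos, and each node routes toward a destination via whichever neighbour relayed that destination’s hellos most reliably.

```python
# Toy sketch of decentralised link-quality routing (in the spirit of,
# but much simpler than, BATMAN). All counts below are invented.
from collections import defaultdict

# heard[node][(destination, via_neighbour)] = hellos received last interval
heard = defaultdict(lambda: defaultdict(int))

# Simulated reception counts at node "A":
heard["A"][("D", "B")] = 42   # D's hellos arriving via neighbour B
heard["A"][("D", "C")] = 17   # ... and via neighbour C

def next_hop(node: str, dest: str) -> str:
    """Best neighbour toward dest = the one that relayed the most hellos."""
    candidates = {via: n for (d, via), n in heard[node].items() if d == dest}
    return max(candidates, key=candidates.get)

print(next_hop("A", "D"))  # B -- the statistically better path
```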

But what of people who cannot afford the outlay for things like individual WiFi routers? There is significant overcapacity in most of the individually owned nodes in the example above, so, just as large telecom corporations did in an earlier age, cheaper technology should make it possible for smaller players to achieve financial break-even in sparser markets. In fact, this decrease in cost has already enabled non-state, non-corporate actors to help people living out of sight of both corporations and the state step into the public consciousness.

AirJaldi is such an organisation. It began in the little hilltop town of Dharamsala in northern India, home to the Tibetan government in exile. Its approach to financing and building its mesh network has necessarily been more centralised, as the population served is less tech-savvy, generally poorer, and situated in the mountains of a developing country. It is actually two organisations: one profit-making, represented by airjaldi.com, and one non-profit, represented by airjaldi.org. The latter is supported by grants from funding agencies, but the former has actually achieved break-even in the market. Again, this has only been possible because of the humble point-to-point WiFi antenna: radiating only in the direction you need drastically decreases the power required while avoiding interference to receivers in other directions, allowing a far cheaper backbone of communication relays to be set up between, say, a distant village and a mainstream telco’s network.
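The power saving is easy to estimate with an idealised model – all power spread uniformly inside a cone, none elsewhere (real antennas leak into side lobes):

```python
import math

def ideal_gain(beamwidth_deg: float) -> float:
    """Gain over an isotropic radiator if all power is confined to a
    cone of the given full beamwidth (an idealisation)."""
    half = math.radians(beamwidth_deg / 2)
    solid_angle = 2 * math.pi * (1 - math.cos(half))  # steradians
    return 4 * math.pi / solid_angle

g = ideal_gain(10)  # a 10-degree beam, typical of point-to-point links
print(f"~{g:.0f}x ({10 * math.log10(g):.0f} dBi)")  # ~526x, ~27 dBi
```

In other words, a tight beam can deliver the same signal with hundreds of times less transmit power, which is what makes cheap long-distance relays viable.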

The Commons: A Tragedy?

There are problems with the current model of spectrum ownership and allocation, where the state leases bands of spectrum over a pre-ordained geographical area to companies using auctions – spectrum which the companies cannot lease to a third party. The auction fees that the telco pays the state are a sunk cost and cannot be invested in its wireless network, resulting in operators focusing on areas with the highest return on investment. These are inevitably urban areas; the poorer and more sparsely populated portion of the auction’s geographical unit, to whose welfare the auction fees supposedly go, is deprived of a telecom network. But any method of allocating exclusive access apart from an auction would raise the spectre of favouritism. So the question of how best to maximise the utility of spectrum is a difficult one. One way to solve this conundrum is to look at spectrum as a private resource and facilitate complete and permanent ownership and private buying and selling, just like land. This would have the expected benefit of spectrum immediately going to those who value it the most, as well as the expected problems: speculative activity and long positions could cause bubbles and artificial scarcity. The regulatory overhead is also inverted: the private ownership of intangibles has to be enforced by the government, as opposed to tangible things that you could fence off or lock up. The other attractive extreme would be to view spectrum as a commons in a glorious technological future where people use intelligent devices possessing an ‘etiquette’ that allows them to speak to and past each other politely, so that they can all function together in sweet harmony – Republic Wireless’ achievements are baby steps in that direction. Or perhaps such devices need a conductor to coordinate them, a central brain owned either by a consortium of companies or by the state.[7] Or perhaps, in the short term at least, the best way forward is a mixture of exclusive allocation in the form of a lease, as is done today, and usage rights for secondary users with remuneration to the telecom company that presently holds the lease, decided upon by the market, regulation or a combination of the two.[8]

But a side effect of a state monopoly on spectrum is the strong incentive towards rent extraction – in other words, politicians have an incentive to charge telecom operators the highest prices they are willing to pay and use that money to finance pet populist schemes. For states to wean themselves off this source of revenue, seemingly produced out of thin air, will be hard. It is clearly not a problem with a trivial answer, and economists and engineers are discussing and debating it in various fora;[9] the same spirit of scientific debate that once informed administrators and public alike is in operation. If not the tree of liberty, perhaps the shrub of optimal resource utilisation for a broader and more resilient public sphere might be nourished by the blood of duelling telecom experts.

Telecom is to the public sphere today what the printing press was to that of the 16th century. It needs access to spectrum, and to technology that enables the use of that spectrum. To free the means of communication, the state has to legalise unlicensed (or minimally licensed) access to the spectrum and thus create a potential market, which free enterprise can then fill. Optimising the public good is not that simple, however: deregulating spectrum so as to create a market for WiFi for personal use – devices that merely coexist with each other – is easier than incentivising the creation of efficient, unlicensed networks made up of devices that cooperate. In other words, people are willing to buy a wireless router they can use inside their apartment, because they are paying for exactly what they will be using. It is less likely that they would fork out extra cash for networking hardware that allows their router to cooperate with other similar network-intelligent devices and form a mesh – because our homo economicus has no guarantee that his neighbours will also buy that device and make the extra money worthwhile. This falls squarely into the so-called ‘tragedy of the commons’,[10] except that instead of the exhaustion of a common resource, the tragedy here is that the common resource – the mesh – can only be created if everyone cooperates. In other words, there is a positive externality, which increases with the size of the network. If the big manufacturers do not bet on this happening, there is a case for state-funded research. But has there really been a failure of the market?

Whitespace

When television went from terrestrial analogue to satellite and digital, the digital dividend freed up spectrum that either used to be kept as ‘guard bands’ around the wavelengths used for the old TV channels or was occupied by the TV channels themselves. Some broadcast frequencies might also be limited in their coverage to certain geographical areas, leaving large swathes of countryside with a free band in those wavelengths. These unused swathes of spectrum are called whitespaces. Governments have stepped in, and a number of studies are being carried out on the feasibility of using these wavelengths.

But a number of companies, including the likes of Google, Microsoft and Intel, have come together to form organisations dedicated to exploring the use of whitespace for rural internet access – like the White Spaces Coalition, the Wireless Innovation Alliance and the Whitespace Alliance, the last of which has adopted IEEE’s 802.22 wireless standard, calling it Wi-FAR – essentially WiFi that operates at lower frequencies (below 700 MHz) to cover much larger areas, with a maximum radius of around 60 kilometres. That is more than 10,000 square kilometres per antenna.
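The coverage figure is just the area of a disc:

```python
import math

# Area covered by one Wi-FAR base station with a 60 km maximum radius.
radius_km = 60
print(f"{math.pi * radius_km ** 2:,.0f} km^2")  # ~11,310 km^2
```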

While industry should be allowed to produce, deploy and operate to the best of its ability, legitimate technological concerns should not be gamed to create egregious constraints on citizens’ rights. This will be hard because there are strong incentives for governments to collude, both because of the protection money they stand to gain through spectrum auctions and the ease with which they can coerce the centralised human and network resources of traditional telecom companies to surveil their customers. But if the choice is between old ways of doing business and the greatly increased ease, efficiency and freedom of unlicensed networks, then nothing is really too big to fail – binding the market in a regulatory straitjacket to suit old business models would mean passing on an incredible opportunity to broaden and strengthen the public sphere. Right now, whitespace technology is being trialled by IIT-Bombay in India in partnership with the Whitespace Alliance. The example of TV whitespace – where governments and corporations are working separately towards the common goal of creating a standard for communications, which is going to be far less restrictive than the old ones – gives the author cause for cautious optimism.

References
1 By which one could mean either the actual code or the protocols: cp. Lawrence Lessig, Code: Version 2.0, New York, NY: Basic Books, 2006.
2 CG stands for Central Gondwanaland, the vast swathe of mostly forested central India with few or no towns and little in the way of civic infrastructure.
3 And big-government advocates should note that none of this would have been possible without the concentration of capital in corporations, which created a market for International Business Machines’ first mainframes; the decrease in the production cost of computing technology due to research conducted by IBM and other manufacturers led to the ‘invention’ of the Personal Computer, a computing device that wealthy individuals could finally afford. It took a decade or two before computers became cheap enough for people to suggest one laptop per child as a societal duty. This has been the trend for technology as a whole, including the data- and wireless-capable devices we are looking at.
4 GSM is the abbreviation for Global System for Mobile Communications and basically means a computer joined at the hip with a high-power, high-resolution, two-way radio.
5 Back when GSM technology came in. It still is the dominant technology for voice calls because of a combination of sunk costs by consumers as well as industry, and perhaps licensing issues.
6 Cp. Phil Goldstein, “Deloitte: Two-Thirds of U.S. Consumers Prefer Wi-Fi over Cellular”, FierceWireless, November 26, 2013.
7 For an example of a database in operation, see this TV Whitespace Database.
8 Cp. Jon M. Peha and Sooksan Panichpapiboon, “Real-Time Secondary Markets for Spectrum”, Telecommunications Policy, 28, 2004, pp. 603–618.
9 Like the New America Foundation in the US; sadly not so much in India, because the Indian government has been rather opaque about its decisions until now, though that is changing.
10 Cp. Durga P. Satapathy and Jon M. Peha, “Spectrum Sharing without Licences: Opportunities and Dangers”, Proceedings of The Telecommunications Policy Research Conference (TPRC), 1996, pp. 15–29, p. 16.

Beli did time in grad school doing physics and worked at the Center for Internet and Society in Bangalore.