The stakes in the Aadhaar case are huge, given the central government’s ambitions to export the underlying technology to other countries. Russia, Morocco, Algeria, Tunisia, Malaysia, the Philippines, and Thailand have expressed interest in implementing biometric identification systems inspired by Aadhaar. The Sri Lankan government has already made plans to introduce a biometric digital identity for citizens to access services, despite stiff opposition to the proposal, and similar plans are under consideration in Pakistan, Nepal and Singapore. The outcome of this hearing will impact the acceptance and adoption of biometric identity across the world.
At home in India, the case for biometric identity rests on claims that it will improve government savings through efficient, targeted delivery of welfare. But in the years since its implementation, there is little evidence to back the government’s savings claims. A widely quoted World Bank estimate of $11 billion in annual savings (or potential savings) due to Aadhaar has been challenged by economists.
The architects of Aadhaar also invoke inclusion to justify the need for creating a centralized identity scheme. Yet, contrary to government claims, there is growing evidence of denial of services for lack of an Aadhaar card, and of authentication failures that have led to death, starvation, denial of medical services and hospitalization, and denial of public utilities such as pensions, rations, and cooking gas. During last week’s hearings, Aadhaar’s governing institution, the Unique Identification Authority of India (UIDAI), was forced to clarify that access to entitlements would be maintained until an adequate mechanism for authentication of identity was in place, issuing a statement that “no essential service or benefit should be denied to a genuine beneficiary for the want of Aadhaar.”
Centralized Decision-Making Compromises Aadhaar’s Security
The UIDAI was established in 2009 by executive action as the sole authority for allocating resources and contracting institutional arrangements for the issuance of Aadhaar numbers. With no external or parliamentary oversight over its decision-making, UIDAI engaged in an opaque process of private contracting with foreign biometric service providers for technical support for the scheme. The government later passed the Aadhaar Act in 2016 to legitimize UIDAI’s powers, but used a special maneuver that enabled it to bypass the upper house of Parliament, where the government lacked a majority, and prevented the bill’s examination by the Parliamentary Standing Committee. The manner in which the Aadhaar Act was passed further weakens the democratic legitimacy of the Aadhaar scheme as a whole.
The lack of accountability emanating from UIDAI’s centralized decision-making is evident in the project’s rushed proof-of-concept trial. Security researchers have noted that the trial sampled data from just 20,000 people, and that nothing in the UIDAI’s report confirms that each electronic identity in the Central ID Repository (CIDR) is unique or that de-duplication could ever be achieved. As mounting evidence confirms, the decision to create the CIDR was based on the assumptions that biometrics cannot be faked, and that even if they were, the fakes would be caught during de-duplication.
It emerged during the Aadhaar hearings that UIDAI has neither access to, nor control of, the source code of the software used for the Aadhaar CIDR. This means that to date there has been no independent audit of the software that could identify data-mining backdoors or security flaws. The Indian public has also become concerned about the practices of the foreign companies embedded in the Aadhaar system. One of the three contractors to UIDAI that were provided full access to classified biometric data stored in the Aadhaar database and permitted to “collect, use, transfer, store and process the data” was US-based L-1 Identity Solutions. The company has since been acquired by a French company, Safran Technologies, which has been accused of hiding the provenance of code bought from a Russian firm to boost the software performance of US law enforcement computers. The company is also facing a whistleblower lawsuit alleging it fraudulently took more than $1 billion from US law enforcement agencies.
Compromised Enrollment Scheme
The UIDAI also outsourced the responsibility for enrolling Indians in the Aadhaar system. State government bodies and large private organizations were selected to act as registrars, who, in turn, appointed enrollment agencies, including private contractors, to set up and operate mobile, temporary or permanent enrollment centers. UIDAI created an incentive-based model whereby registrars would earn Rs 40-50 (about 75c) for every successful enrollment. Since compensation was tied to successful enrollments, the scheme gave operators an incentive to maximize the number of enrollments rather than their quality.
By delegating the collection of citizens’ biometrics to private contractors, UIDAI created the scope for the enrollment procedure to be compromised. Hacks to work around the software and hardware soon emerged, and have been employed in scams using cloned fingerprints to create fake enrollments. Corruption, bribery, and the creation of Aadhaar numbers with unverified, absent or false documents have also marred the rollout of the scheme. In 2016, on being detained and questioned, a Pakistani spy produced an Aadhaar card bearing his alias and fake address as proof of identity. The Aadhaar card had been obtained through the enrollment procedure by providing fake identification information.
An India Today investigation has revealed that the misuse of Aadhaar data is widespread, with agents willing to sell demographic records collected from Aadhaar applicants for Rs 2-5 (a few cents). Another report, from 2015, suggests that the enrollment client allows operators to use their own fingerprints and Aadhaar numbers to access, update and print the demographic details of other people without those people’s consent or biometric authentication.
More recently, an investigation by The Tribune exposed that complete access to the UIDAI database was available for Rs 500 (about $8). The reporter paid to gain access to the data collected by UIDAI, including name, address, postal code, photo, phone number and email. For an additional Rs 300, the service provided access to software that allowed the printing of an Aadhaar card after entering the Aadhaar number of any individual. A young Bangalore-based engineer has been accused of developing an Android app, “Aadhaar e-KYC”, downloaded over 50,000 times since its launch in January 2017, which claimed to be able to access Aadhaar information without authorization.
In light of the unreliability of information in the Aadhaar database and the systemic failure of the enrollment process, the biometric data collected before the enactment of the Aadhaar Act is an important issue before the Supreme Court. The petitioners have sought the destruction of all biometrics and personal information captured between 2009 and 2016 on the grounds that it was collected without informed consent and may have been compromised.
Under Section 2(c) of the Aadhaar Act, 2016, authentication of a person holding an Aadhaar number was originally meant to involve returning a “Yes” if the person’s biometric and demographic data matched those captured during enrollment, and a “No” if they did not. But somewhere along the way, this policy changed: in 2016, the UIDAI introduced a new mode of authentication, whereby submitting biometric information against an Aadhaar number would return the holder’s demographic information.
This has spawned a range of public and private institutions that use Aadhaar-based authentication for the provision of services. However, authentication failures, whether due to incorrectly captured fingerprints or to biometric details changed by old age or wear and tear, are increasingly common. The infrastructure for electronic authentication is also limited in India, and printed copies of the Aadhaar number and demographic details are therefore accepted as identification.
There are two main problems with this. First, as printed Aadhaar copies are just pieces of paper that can be easily faked, their use and acceptance creates an avenue for fraud. UIDAI could limit the use of physical copies; however, doing so would deprive beneficiaries of services when authentication fails. Second, Aadhaar numbers are supposed to be secret: using physical copies encourages the number to be revealed and used publicly. For the UIDAI, whose aim is speedy enrollment and provision of services despite authentication failures, there is no incentive to stop the use of printed Aadhaar numbers.
Data security has also been weakened because institutions using Aadhaar for authentication have not met the standards for processing and storing data. Last year, UIDAI had to get more than 200 Central and State government departments, including educational institutes, to remove lists of Aadhaar beneficiaries, whose names, addresses, and Aadhaar numbers had been uploaded to, and were publicly available on, their websites.
Can Aadhaar be secured? Not without significant institutional reforms. Aadhaar does not have an independent threat-analysis agency: securing the biometric data that has been collected falls under the purview of UIDAI itself. The agency does not have a Chief Information Officer (CIO) and has no defined standard operating procedures for data leakages and security breaches. Demographic information linked to an Aadhaar number, made available to private parties during authentication, is already being collected and stored externally by those parties; the UIDAI has no legal power or regulatory mechanism to prevent this. The existence of parallel databases means that biometric and demographic information is increasingly scattered among government departments and private companies, many of which have little conception of, or incentive to ensure, data security.
Second-order tasks of oversight and regulatory enforcement serve a critical function in creating accountability. Although UIDAI has issued legally enforceable rules, there is no monitoring or enforcement agency, within UIDAI or outside it, to check whether these rules are being followed. For example, an audit of enrollment centers revealed that UIDAI had no way of knowing whether operators were retaining biometrics, or for how long.
UIDAI has also neither adopted nor encouraged the reporting of software vulnerabilities or the testing of enrollment hardware. Vulnerability reporting provides learning opportunities and improves coordination; security researchers can fulfill the critical task of helping institutions identify failures, allowing incremental improvements to the system. But far from encouraging such security research, UIDAI has filed FIRs (First Information Reports, i.e. police complaints) against researchers and reporters who uncovered flaws in the Aadhaar ecosystem.
As controversies over its ability to keep its data secure have grown, the agency has stuck to its aggressive stance, vehemently denying any suggestion of vulnerabilities in the Aadhaar apparatus. This attitude is perplexing given the number of data breaches and procedural gaps being uncovered every day. UIDAI is so confident of its security that it filed an affidavit before the Supreme Court in the Aadhaar case claiming that the data cannot be hacked or breached. Such defiance of its own patchy record hardly provides cause for confidence.
The Way Forward
The current Aadhaar regime is structured to radically centralize the implementation of Indian government and private digital authentication systems. But a credible national identity system cannot be created by an opaque, unaccountable centralized agency that chooses not to follow democratic procedures when creating its rules. It would have made more sense to confine UIDAI’s role to maintaining the legal structure that secures individuals’ rights over their data, enforces contracts, ensures liability for data breaches, and resolves disputes. In that way, the jurisdictional authority of UIDAI would be limited to tasks where competition cannot serve as an organizing principle.
The present scheme has created a market of institutions that use Aadhaar to authenticate identity in the provision of services, with varying degrees of transparency and privacy. The central control of the scheme is too rigid in some ways: the bureaucratic structure of Aadhaar does not facilitate adaptation to security threats, or allow vendors or private companies to improve data protection practices. Yet in other ways it is not strong enough, given the security lapses it has enabled by giving multiple parties free access to the Aadhaar database.
By making Aadhaar mandatory, UIDAI has taken away the right of individuals to exit these unsatisfactory arrangements. The coercive measures taken by the State to encourage the adoption of Aadhaar have introduced new risks to individuals’ data and national security. Even the efficiency argument has fallen flat, negated by the unreliability of Aadhaar authentication. The tragedy of Aadhaar is that it not only fails to generate efficiency and justice, but also introduces significant economic and social costs.
All in all, it’s hard to see how this mess can be fixed without scrapping the system and—perhaps—starting again from scratch. As drastic as that sounds, the current Supreme Court challenge may, ironically, provide a golden opportunity to revamp the fatally flawed existing institutional arrangements behind Aadhaar, and provide the Indian government with a fresh opportunity to learn from the mistakes that brought it to this point.
This commentary originally appeared on the Electronic Frontier Foundation’s website.
By Jyoti Panday on 13 March, 2018
The Supreme Court of India has commenced final hearings in the long-standing challenge to India’s massive biometric identity apparatus, Aadhaar. Following last August’s ruling in the Puttaswamy case rejecting the Attorney General’s contention that privacy was not a fundamental right, a five-judge bench is now weighing the privacy concerns raised by the unsanctioned use of Aadhaar.
In this push and pull between humans and technology, it is important to recognise which kinds of roles will disappear going forward and identify the ones that will arise. To stop the workforce from becoming obsolete, it is necessary to protect those in replaceable jobs by re-skilling and up-skilling them. To minimise the crisis, governments must work in tandem with companies to advance future technologies and with labour to provide training.
This paper will detail the robot threat, the missed opportunities in the current system, and the opportunities policymakers are yet to seize. Discussion points include strategies for making human workers more viable and for preparing to deploy them alongside bots.
India’s labour force faces tough competition from robots. About 20–30 percent of employers in India anticipate a decrease in headcount[iii] as automation takes over low-skill, monotonous jobs. At Infosys, for example, some 11,000 workers[iv] have already lost their jobs to automation, and 3,000 Wipro employees faced the same fate[v] after the company deployed Holmes, its AI project. These instances leave little doubt that the IT industry will shed jobs: 6.4 lakh (640,000) of them by 2021, according to HfS Research estimates.[vi]
And IT is not the only sector in jeopardy. Robotic Process Automation (RPA) has started replacing workers in financial services and has helped reduce costs by more than 50 percent,[vii] thanks to the improved accuracy and efficiency of data-intensive, repetitive tasks. These virtual systems can work round-the-clock and produce a faster turnaround. Axis Bank, ICICI and HDFC are among the banks[viii] that have adopted the technology for traditional functions such as passbook updating, cash deposits, verification of know-your-customer details, salary uploads and even loan processing and sales of financial products. The Indian auto sector, too, has adopted automation rapidly: Maruti Suzuki’s factory in Manesar, Haryana, for instance, has 7,000 workers and 1,100 robots. In agriculture, drones and robots[ix] will maximise yield and reduce damage to crops. High employment-generating sectors such as manufacturing, textiles and food processing services are also under threat.
Technology is now challenging human supremacy in many spheres by executing tasks faster and with smaller error margins. For instance, some AI can identify cancer more accurately than trained pathologists and sniff out fraudulent banking activity in milliseconds. Warehouse robots[x] employed by e-commerce majors save time and work with higher precision.
In addition to an increase in layoffs, mass hiring, too, has slowed down in tech firms as operations shift to smaller teams that can handle more sophisticated needs while robots take over simpler functions.
Of the million new workers who join the country’s workforce each month, fewer than 0.01 percent[xi] are able to find jobs. Data from the National Skill Development Corporation (NSDC), obtained through an RTI request, revealed that of the 8 lakh candidates trained through non-scheme skilling programmes in 2016–17, less than half secured jobs.[xii] In 2015, too, only about 46 percent of skilled students found employment.
In its guidelines, the government does note that “placement per se is not compulsory under these schemes.” However, the training service centres provide “placement assistance” in the form of networking opportunities, resumé dissemination and aid in setting up interviews. To incentivise private-partner training centres to better their placement rate, the NSDC offers monetary rewards such as loans, equity or grants,[xiii] in return for more success stories.
As a result of tightening skilling-related rules, any centre with consistently subpar placement performance can lose its “star grades” and eventually be dropped from the PMKVY (Pradhan Mantri Kaushal Vikas Yojana). Monitoring of students, too, is being stepped up: dipstick surveys of students every few months after graduation will assess whether they have managed to secure and hold down jobs.
Another deterrent in hiring is that even among those equipped with relevant hard skills, a soft skills deficit is pervasive. Over 36 percent of 42,000 Indian employers surveyed by multinational HR software and consulting services firm ManpowerGroup[xiv] cited a lack of people skills—communication, confidence, analytical thinking—as a reason for being unable to fill vacancies successfully.
The human skills gap, however, is not the only hole that needs plugging. India also has a long way to go in terms of implementing and using new-age technologies from an infrastructure point of view.
A recent survey by the research arm of Swiss investment firm UBS[xv] noted that India, which ranked 13th out of 18 nations, is not only behind the US and the UK when it comes to innovation on a per capita basis, it also “badly” lags other Asian countries. In absolute terms, India’s $50.3 billion R&D budget outstrips those of other nations, but as a proportion of GDP, it is a dismal 0.6 percent. Moreover, while the top talent in the country—mostly from the revered Indian Institute of Technology (IIT) campuses, with an acceptance rate of under 1 percent[xvi]—is impressive, it represents a small share of total graduates. Many of the educational institutes in the country are subpar.[xvii] Given this sorry state, both technology implementation and education need an overhaul.
Even though automation will be the kill switch for many traditional roles such as data entry and server maintenance, not all is lost.
According to market analysis firm HfS Research, those in medium- to high-skilled jobs stand to gain from the trend towards automation.[xviii] In the next five years, the number of medium-skilled IT employees is slated to rise to 1 million from 900,000. For high-skilled employees, there is a more marked difference as the talent pool jumps to over half a million from 320,000.
Source: HfS Research.
According to San Francisco- and Bengaluru-based online professional skilling platform Simplilearn’s August 2017 report, new technologies will create employment in digital domains such as big data, AI, the internet of things (IoT), cloud computing and cybersecurity.
The roles that will have the most vacancies in the next few years are cloud computing consultants for Microsoft Azure and Amazon Web Services. Demand for data scientists and AI architects follows.
These new-age jobs may be fewer in number compared to the hundreds of thousands of entry-level openings in the past, but the returns are favourable. “Say, if it’s eating away two jobs where the guys would earn INR 25,000, it’s creating one job where he will earn INR 50,000-60,000, compelling people to become more productive,” Rituparna Chakraborty, co-founder and executive vice-president of HR firm TeamLease Services, told Quartz.[xix]
The country’s youth, too, is optimistic about the changing tides. Of 1,000 young professionals surveyed by Gurgaon-based education technology firm Talentedge, 83 percent of 21- to 24-year-olds were confident that automation will not render them obsolete. Across age groups, people either believed that they already had the skills to tackle automation or could up-skill and re-skill to do so.
Source: Talentedge India.
The need of the hour, therefore, is to re-skill workers. Corporations have already stepped up training within, but it is policy that can drive staggering changes on a state- or country-wide level, if done right.
The government’s biggest and most well-recognised effort to keep the workforce relevant is its National Skills Development Mission.[xx] In July 2015, Prime Minister Narendra Modi officially launched the initiative, designed to “create convergence across sectors and states in terms of skill training activities.” It not only consolidates and coordinates skilling efforts, but also expedites decision-making across sectors to achieve skilling at a large scale with added speed while maintaining standards.
In October 2016, the union cabinet committee on economic affairs approved two new skill-development programmes[xxi] with combined funding of INR 6,655 crore: INR 4,455 crore for SANKALP (Skills Acquisition and Knowledge Awareness for Livelihood Promotion) and INR 2,200 crore for STRIVE (Skill Strengthening for Industrial Value Enhancement). Both schemes are intended to introduce institutional reforms and improve the quality and market relevance of skill-development training programmes. The latest infusion of cash complements three pre-existing bodies: the National Skill Development Agency, the NSDC and the Directorate General of Training.
For individuals who cannot afford the cost of training upfront but wish to skill up, the government has established a special skilling loan.[xxii] Any Indian national who has “secured admission in a course run by Industrial Training Institutes (ITIs), Polytechnics or in a school recognised by central or State education Boards or in a college affiliated to recognised university, training partners affiliated to National Skill Development Corporation (NSDC)/Sector Skill Councils, State Skill Mission, State Skill Corporation, preferably leading to a certificate/diploma /degree issued by such organisation as per National Skill Qualification Framework (NSQF) is eligible for a Skilling Loan.” The scheme provides loans of INR 5,000 to INR 1,50,000. It covers not only tuition but also other “reasonable expenditure,” such as exam fees, library charges, lab fees and purchase of books.
As of July 2017, there are already as many as 200 Pradhan Mantri Kaushal Kendras (PMKKs) in 28 states that provide skill-based training. More than 13,000 ITIs—governmental and private—supporting over 23 lakh students “are being modernised and upgraded to institutes of great standards and technical support,” Rajiv Pratap Rudy, Minister of State (Independent Charge) for Skill Development and Entrepreneurship, told YourStory.[xxiii]
Unfortunately, these policies, together with the government’s flagship programme to train 400 million people by 2022,[xxiv] are not readily translating into jobs.
Among the policies due for an update is the 2011 national manufacturing policy (NMP), which allows for tax and other benefits to fuel growth in the sector so that manufacturing can contribute a quarter of India’s GDP by 2022. In May 2017, Defence Minister Nirmala Sitharaman, who then held the post of commerce and industry minister, said that this framework was being revamped to account for a host of government initiatives such as Make in India, Digital India and Skill India.
By 2020, the industrial automation industry is set to surpass INR 197 billion[xxv] in value, growing at a CAGR of 12 percent[xxvi] over the second half of this decade. The Indian government’s Make in India push to boost local manufacturing is slated to play a part in this rise, as is growing factory automation. Firms with a large Indian presence, such as Siemens and Schneider, stand to gain from the ongoing revolution.
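As a rough sanity check, the 12 percent CAGR figure can be unpacked with the standard compound-growth formula; note that the INR ~112 billion base value below is a back-calculated assumption (chosen so that five years of 12 percent growth lands near the cited 2020 figure), not a number from the report.

```python
# Compound annual growth rate (CAGR) sketch: value_n = base * (1 + r) ** n.
# The 2015 base of ~INR 112 billion is an assumed, back-calculated figure,
# not taken from the cited source.

def project(base: float, cagr: float, years: int) -> float:
    """Compound `base` forward at rate `cagr` for `years` periods."""
    return base * (1 + cagr) ** years

base_2015 = 111.8  # INR billion (assumption, back-calculated from the 2020 figure)
value_2020 = project(base_2015, 0.12, 5)
print(f"Projected 2020 value: INR {value_2020:.0f} billion")
```

Five compounding periods at 12 percent multiply the base by roughly 1.76, which is what makes a sub-INR-120-billion industry in mid-decade consistent with the INR 197 billion figure cited for 2020.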
Another silver lining for India is its lead in technologies such as the IoT. Connecting countless everyday devices to the web, the IoT blurs the lines between the physical and the online world. The technology has diverse applications ranging from the small scale, such as personal belongings and homes, to entire smart cities and environmental monitoring.[xxvii] The Narendra Modi government has allocated INR 500 crore to support the rollout of 5G[xxviii] in the country by 2020, a crucial network upgrade for supporting the IoT.
Indian technology firms are already doing IoT-related business worth $1.52 billion, accounting for the majority—44 percent—of the $3.5 billion global IoT technology services outsourcing market, an August 2017 report by Bengaluru-based research, consulting and advisory firm Zinnov noted.[xxix]
Major Indian states are already capitalising on the opportunity. Last year, the Andhra Pradesh state government approved a first-of-its-kind IoT policy[xxx] to become an IoT hub by 2020. Officials in Karnataka committed $6.1 million[xxxi] to the construction of a state-of-the-art AI and data-science hub. The former wants to create 50,000 jobs in the industry, while the latter is aiming for 35,000 data-science vacancies.
Over the last few years, India has eased up on foreign-investment constraints. Industrial Revolution 4.0—the move from simple digitisation to innovation based on a combination of technologies[xxxii]—is gaining a foothold in the growing economy. “In this debate, the Industrial Revolution 4.0 is rapidly catching up. Whether you like it or not, some industries are bringing in robotics in a very big way. Some partly make use of it while others have not been impacted by this because they cannot afford it or they do not want it. But we have to have a place for all the three,” said Sitharaman.[xxxiii] In addition to outsourcing, policies are being put in place to create a robust IoT ecosystem within India.
“The Indian government’s plan of developing 100 smart cities in the country, for which INR 7,060 crore has been allocated in the current budget could lead to a massive and quick expansion of IoT in the country,” the Ministry of Electronics and Information Technology (MeitY) noted in its IoT policy.[xxxiv] The governmental body is aiming to create an IoT industry worth $1.5 billion by 2020. To achieve this rapid scaling up, the government has allocated INR 3 crore to Education and Research Network, India’s autonomous scientific society under MeitY. INR 1 crore each will also be given to 15 academic or institutional partners to fund the creation of resource centres and test beds.
AI, too, is in need of a supportive environment. A joint paper from 2017, by India’s IT industry body NASSCOM and London-headquartered consulting firm PricewaterhouseCoopers (PwC),[xxxv] stated that the government should use policy to promote AI in the country to benefit the three aforementioned schemes. As the West begins to establish laws about driverless cars and drones, India must catch up. “Instead of waiting for technology to reach a level where regulatory intervention becomes necessary, India could be a frontrunner by establishing a legal infrastructure in advance,” NASSCOM and PwC noted. “Alternatively, early public-sector interest in AI could trigger a spurt of activity in the AI field in India.” The two entities further added that government backing could help incubate AI-based research and training, e.g. making effective training data sets from public portals and allowing access to open-source software libraries, toolkits and development tools with low-cost code repositories and development languages.
The government is finally giving the IT industry its due attention. It is setting up an expert committee[xxxvi] to advise the IT ministry on issues pertaining to workforce, privacy, security and more. In addition to encouraging budding industries, governments should strive to make technology a way of life for current students. The All India Management Association (the national apex body of the management profession in India), along with the Boston Consulting Group, has made recommendations[xxxvii] to the central and state governments across various fronts to improve higher education, technical education and the field of IT overall.
India has started considering the next onslaught of technologies and shows readiness to prepare for it. Yet most of the government backing and policies are still under construction. From waiting on experts to weigh in on AI to investigating the rollout of 5G, India’s tryst with new technologies is still in its nascent stages.
The country should try to implement these policies before the technologies arrive. Robust frameworks could lead the way in establishing the use of the tech not only from a government perspective but also from the perspective of companies and consumers. This is especially important because many of the emerging technologies, from big data analytics to artificial intelligence, rely on massive data banks to hone their systems. The government’s role could be to offer access to large public databases and lay down the rules for proper, legal data integrity and use.
In addition to shaping policies, the government should look at providing incentives, just as it provides loans and grants to training institutes that churn out employable graduates and offers tax benefits under the NMP. For instance, the government is capable of calculating the optimal deployment and use of 5G, but—given that massive debts[xxxviii] have piled up in India’s $50-billion telecom industry, and that in the last 4G spectrum auction, 60 percent of the blocks[xxxix] went unsold—there’s little proof that an actual transition to the network is on the cards.
After addressing core infrastructure issues, the government must also assess the state of the labour force. Devashish Mitra, a professor of Economics at Syracuse University, explains the cumbersome nature of the existing laws in a World Bank blog post:
“There are 200 labour laws in India, including 52 Central Acts.[xl] Probably, the three most restrictive acts are the Industrial Disputes Act (IDA), the Industrial Employment (Standing Orders) Act and the Trade Union Act. The IDA requires firms with more than 100 workers to seek permission from their respective state governments for retrenchment or laying off of workers. This is seldom granted. The Industrial Employment (Standing Orders) Act requires such firms to ask for permission even for modifications in job descriptions. The Trade Union Act lets any seven workers within a firm form a union, which leads to multiple labour unions. In addition, this act provides each such union the right to strike and to represent workers in legal disputes with employers.”
As the poor recruitment numbers show, skilling alone is not enough. With 15 million freelancers,[xli] second only to the US, it is imperative that governments ease labour laws to help companies hire and let go of talent as and when needed. This is particularly important at a time when techies equipped with new-age skills are looking to become independent workers who collaborate with multiple companies instead of committing to one as full-time employees.
Overall, the rise of robots has prompted India to take a hard look at the future of its humans, and the policies need to catch up faster.
[i] Katie Allen, “Technology Has Created More Jobs than It Has Destroyed, Says 140 Years of Data,” The Guardian, 18 August 2015, www.theguardian.com/business/2015/aug/17/technology-created-more-jobs-than-destroyed-140-years-data-census.
[ii] Venkatesh Ganesh, “Automation to Kill 70% of IT Jobs,” The Hindu Business Line, 14 November 2017, www.thehindubusinessline.com/info-tech/automation-to-kill-70-of-it-jobs/article9960555.ece.
[iii] ManpowerGroup, “The Skills Revolution: Digitization and why skills and talent matter,” 2017, http://www.manpowergroup.com/wps/wcm/connect/5943478f-69d4-4512-83d8-36bfa6308f1b/MG_Skills_Revolution_FINAL.pdf?MOD=AJPERES&CACHEID=5943478f-69d4-4512-83d8-36bfa6308f1b.
[iv] Sanjana Ray, “Infosys Says It Has Released 11,000 Jobs Due to Automation,” YourStory.com, 26 June 2017, yourstory.com/2017/06/infosys-job-releases-automation/.
[v] Kunal Anand, “3000 Engineers Freed Up At Wipro After Artificial Intelligence Learns To Do Their Work!” Indiatimes, 10 June 2016, www.indiatimes.com/news/world/3000-software-engineers-might-get-fired-at-wipro-after-artificial-intelligence-learns-to-do-all-their-jobs-256519.html.
[vi] “News,” HfS Research, 15 May 2017, www.hfsresearch.com/news/it-sector-lose-64-lakh-low-skilled-jobs-automation-2021-hfs-research.
[vii] “Robotic Process Automation for Financial Services,” Capgemini Worldwide, www.capgemini.com/service/robotic-process-automation-for-financial-services/.
[viii] Sonali Shukla and Joel Rebello, “Threat of Automation: Robotics and Artificial Intelligence to Reduce Job Opportunities at Top Banks,” The Economic Times, 3 May 2017, economictimes.indiatimes.com/industry/banking/finance/threat-of-automation-robotics-and-artificial-intelligence-to-reduce-job-opportunities-at-top-banks/articleshow/58485250.cms.
[ix] Aakash Sinha, “India in 10 Years: The Bots Are Coming, and It’s Good News,” LiveMint, 3 February 2017, www.livemint.com/Industry/FM4igHc8o3grCOYfbcBNsN/India-in-10-years-The-bots-are-coming-and-its-good-news-f.html.
[x] Richa Bhatia, “India’s Warehouse Automation Is at Its Peak as Industries Gear up to Meet Fulfilment Demand,” Analytics India Magazine, 24 October 2017, analyticsindiamag.com/indias-warehouse-automation-peak-industries-gear-meet-fulfilment-demand/.
[xi] Harsh Mander, “Job Creation in High-Growth India Should Be a Top Priority,” Hindustan Times, 23 May 2017, www.hindustantimes.com/columns/job-creation-in-high-growth-india-should-be-a-top-priority/story-I7jZney9nzFVyhAvehOcDO.html.
[xii] Surabhi/Aditi Nigam, “Just about Half the Candidates Who Get Skilled Land Jobs,” The Hindu Business Line, 9 March 2017, www.thehindubusinessline.com/economy/policy/just-about-half-the-candidates-who-get-skilled-land-jobs/article9578024.ece.
[xiii] Ministry of Skill Development and Entrepreneurship, www.skilldevelopment.gov.in/training-providers.html.
[xiv] Rupali Mukherjee, “Indian Employers Suffer Talent Shortage in Accounting, Finance and IT: Survey – Times of India,” The Times of India, 18 October 2016, timesofindia.indiatimes.com/city/mumbai/Indian-employers-suffer-talent-shortage-in-accounting-finance-and-IT-Survey/articleshow/54917715.cms.
[xvi] Shelly Walia and Manu Balachandran, “These Ten Guys Aced the IIT Entrance Exam. Here’s What They’re Doing after Graduation,” Quartz, 18 March 2015, qz.com/363388/these-ten-guys-aced-the-iit-entrance-exam-heres-what-theyre-doing-after-graduation/.
[xvii] “Poor Show in World Rankings: Government Has a Mega Plan to Boost Higher Education,” The Economic Times, economictimes.indiatimes.com/industry/services/education/poor-show-in-world-rankings-government-has-a-mega-plan-to-boost-higher-education/articleshow/60437400.cms.
[xviii] “Impact of Automation and AI on Services Jobs, 2016-2022,” HfS Research, 4 September 2017, www.hfsresearch.com/market-analyses/impact-of-automation-and-ai-on-services-jobs-2016-2022.
[xix] Ananya Bhattacharya, “Technology Will Continue to Kill IT Jobs-but There’s Still Hope for Indian Engineers,” Quartz, 9 October 2017, qz.com/1096213/technology-will-continue-to-kill-it-jobs-but-theres-still-hope-for-indian-engineers/?utm_source=qzfb.
[xx] Ministry of Skill Development and Entrepreneurship, www.skilldevelopment.gov.in/nationalskillmission.html.
[xxi] “Rs 6,655 Crore Schemes to Boost Skill India Mission Get Cabinet Nod,” The Economic Times, https://economictimes.indiatimes.com/news/politics-and-nation/cabinet-nod-to-major-skill-development-schemes/articleshow/61042184.cms.
[xxii] “Skill Development Loan,” Tamilnad Mercantile Bank, www.tmb.in/retail_skill_loan/get_loans_to_upgrade_your_skills.html.
[xxiii] Sneh Singh, “Exclusive: Rajiv Pratap Rudy on How Skill India Mission Aims to Make Us the Skill Capital of the World,” YourStory.com, 19 July 2017, yourstory.com/2017/07/exclusive-rajiv-pratap-rudy-skill-india-mission-aims-make-us-skill-capital-world/.
[xxiv] Various Reporters, “India Launches Mission to Skill 400 Million by 2022,” Business Standard, 16 July 2015, www.business-standard.com/article/economy-policy/india-launches-mission-to-skill-400-million-by-2022-115071600035_1.html.
[xxv] “India Industrial Automation Industry Is Expected to Reach INR 197 Billion by 2020 with Growth Driven by Rapid Adoption of Modern Technology Backed by Cost Saving Features: Ken Research,” Ken Research.
[xxvi] “Global & Country Specific In-depth Market Analysis,” India Factory Automation Market by Type 2020, TechSci Research, September 2015, www.techsciresearch.com/report/india-factory-automation-market-forecast-and-opportunities-2020/448.html.
[xxvii] Aritra Sarkhel, “There’s an Internet of Things Blueprint for a Clean Ganga,” The Economic Times, 2017, https://economictimes.indiatimes.com/articleshow/60834336.cms; Kalyan Parbat, “Telecom Sector Reels under Heavy Debt and Falling Revenue,” The Economic Times, 17 June 2017, economictimes.indiatimes.com/news/company/corporate-trends/telecom-sector-reels-under-heavy-debt-and-falling-revenue/articleshow/59184371.cms.
[xxviii] Ananya Bhattacharya, “India’s Big Push for 5G Is Great on Paper but Too Ambitious in Practice,” Quartz, 24 October 2017, qz.com/1093049/indias-big-5g-push-great-on-paper-but-too-ambitious-in-practice/.
[xxix] “Zinnov Zones 2017 for IoT Technology Services,” Zinnov, 4 August 2017, zinnov.com/zinnov-zones-2017-for-iot-technology-services/.
[xxx] “Andhra Aims to Become India’s IoT Hub; Approves First-of-Its-Kind Policy,” YourStory.com, 3 March 2016, yourstory.com/2016/03/andhra-pradesh-iot/.
[xxxi] M. Devan, “Karnataka to Invest $6 Million to Set up AI and Data Science Hub,” The News Minute, 4 October 2017, www.thenewsminute.com/article/karnataka-invest-6-million-set-ai-and-data-science-hub-69439.
[xxxii] Klaus Schwab, “The Fourth Industrial Revolution: What It Means and How to Respond,” World Economic Forum, 2016, https://www.weforum.org/Agenda/2016/01/the-Fourth-Industrial-Revolution-What-It-Means-and-How-to-Respond/.
[xxxiii] PTI, “Govt Modifying ‘2011 Vintage’ National Manufacturing Policy,” The Hindu Business Line, 18 May 2017, www.thehindubusinessline.com/news/national/government-modifying-2011-vintage-national-manufacturing-policy/article9707050.ece.
[xxxiv] “IoT Policy Document,” Ministry of Electronics & Information Technology.
[xxxv] “Artificial Intelligence and Robotics – 2017 Leveraging Artificial Intelligence and Robotics for Sustainable Growth,” Assocham India and PwC, March 2017.
[xxxvi] P. Suchetana Ray and Sarika Malhotra, “Govt Sets up Expert Group for Suggestions on Artificial Intelligence Policy,” Hindustan Times, 20 October 2017, www.hindustantimes.com/india-news/govt-sets-up-expert-group-for-suggestions-on-artificial-intelligence-policy/story-R4VnrCufgm7xhh1fVlz9IL.html.
[xxxvii] “India’s New Opportunity—2020,” The Boston Consulting Group and Confederation of Indian Industry, IBEF.
[xxxviii] Kalyan Parbat, “Telecom Sector Reels under Heavy Debt and Falling Revenue,” The Economic Times, 17 June 2017, economictimes.indiatimes.com/news/company/corporate-trends/telecom-sector-reels-under-heavy-debt-and-falling-revenue/articleshow/59184371.cms.
[xxxix] Rishi Iyengar, “Why India’s 4G License Auction Was a Flop,” CNNMoney, 7 October 2016, money.cnn.com/2016/10/07/technology/india-telecoms-4g-auction/index.html.
[xl] Jagdish Bhagwati and Arvind Panagariya, “Why Growth Matters: How Economic Growth in India Reduced Poverty and the Lessons for Other Developing Countries,” 2013.
[xli] “India,” 2018 Marketplace Expansion Index, 2017, expansion.hyperwallet.com/india/.
Setting aside the salacious, airport-thriller novel aspects, this convoluted mess worried national security officials because of the potential it posed for people to be blackmailed or hacked, causing significant damage if some adversarial group discovered it first. Security experts flagged multiple failures in precautions that had led to the discovery of the affair. It helped that there was so much stored personal communication data for the FBI to analyse. However, the same data had made the director—and others involved—vulnerable, and had ultimately jeopardised national security. In today’s data-rich and data-hungry world, the best protection against such threats is to limit the amount of data that service providers and governments collect.
The New Oil
When it comes to data, conventional wisdom says that “more is better”. A recent web search by this author for the phrase “data is the new oil” returned over 200 new posts created in the span of one week. The implication is that data can be processed to extract value to power the economy. However, it differs from oil in one respect: data is not a limited resource. Each day, more data is created, and it is as easy to access as crude oil was when it first became useful, just bubbling up to the surface and ready for the taking.
Much of the new data produced, or collected, each day is personal information that pertains to people’s daily lives, such as what they do, where they go, what they buy and with whom they communicate. People generate massive amounts of data in the online world. When a person visits a website, the internet service provider (ISP) may log the time and the site’s server address; in some places, it is legally obligated to. If the website does not enforce HTTPS, the ISP can also capture all the data that is exchanged. The server hosting the site, too, often logs the IP address of visitors, the time, the pages viewed and the actions taken. If it is a webmail site, the email service provider can also log when a user signed in and from what IP address, and who sent or received each email; it stores the contents too. Many news and information sites employ tools to log how long a person spends on a page, how far they scroll and what else they click on. If the site uses a third-party analytics tool like Google Analytics, that is one more entity capturing the web user’s information, and many sites use more than one analytics provider, which increases the number of entities collecting data. If the site makes its revenue from advertising, it probably hosts a service that serves those ads and also records who views them. Social networks integrated into the site for sharing links may also collect data at a personal level. All in all, anywhere from a handful to a couple of dozen entities can collect and store information about a person’s visit to just one website. One writer counted over a hundred trackers in one and a half days of web use.
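To make the paragraph above concrete, the snippet below parses a single access-log entry in the widely used Apache/nginx “combined” format. The log line itself is fabricated for illustration, but the fields it contains (client IP address, timestamp, request, referrer and browser user agent) are what most web servers record by default for every page view.

```python
import re

# A fabricated access-log line in the Apache/nginx "combined" format.
log_line = ('203.0.113.7 - - [12/Jan/2018:09:15:32 +0530] '
            '"GET /news/article-42 HTTP/1.1" 200 5120 '
            '"https://search.example/?q=vacation" '
            '"Mozilla/5.0 (Windows NT 10.0)"')

# One named group per field the server stores about a single visit.
pattern = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

record = pattern.match(log_line).groupdict()
print(record['ip'])        # who visited
print(record['time'])      # when
print(record['request'])   # which page they viewed
print(record['referrer'])  # the search that led them there
```

Multiply this single record by the analytics scripts, ad servers and social widgets embedded in a typical page, and the handful-to-dozens figure above becomes easy to credit.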
When a person browses the web, there could also be one or more external parties snooping on this activity at any point from the personal device to the remote server. It could be law enforcement or intelligence agencies, foreign governments or criminals and hackers. This further increases the number of people who collect information.
Even offline activities create a data trail, because nowadays, at some point, those activities are digitised. Visit a doctor or a hospital and a bill is generated, which creates a record in the office database. Pay by anything other than cash, such as a credit card or even a cheque, and a record is created with the bank. The bill incurred may in turn create a record in an insurance company’s database. Drive past a license plate scanner on the way home and a record is created with the motor bureau. Meanwhile, the mobile network operator may be collecting the phone’s location data, and turning on location services or using a navigation app produces yet another set of personal data.
This personal data can provide a detailed and intimate profile of individuals, which can lead to abuse of privacy and be exploited to harm them socially, financially or physically. Governments, businesses and civil society are, therefore, grappling with the issue of data protection. Their objective is to find a balance between the legitimate interests of data collectors, and the privacy and security interests of individuals.
Drill, Baby, Drill
There are three groups of entities that are interested in collecting personal information. The first consists of governments that have lawful jurisdiction over the person, such as a city, county/district, state and national governments. The second group consists of entities that a person transacts with, such as banks, e-commerce sites, libraries and other government services. The third group consists of those who want information to defraud, defame or harm either the person, people connected to them, or their employers or governments.
Governments have significantly ramped up data collection for several reasons that they believe are legitimate interests. These reasons include better delivery of services, responding to demand, preventing terrorist acts, thwarting criminals and catching tax evaders. Most of the increase in government data collection has been in response to terrorism. The Snowden leaks confirmed what many had suspected about the vast breadth and scope of communications data that the US National Security Agency (NSA) collects. Other governments also collect communications data on their own citizens, including the UK, Australia and Germany. India has had a plan in the works for several years to connect multiple government intelligence, law enforcement and citizen-centric databases.
Since communications and internet services store the information, governments can access or request metadata about communications, i.e., the information on who is communicating with whom, and the times and locations of the communications. If the contents of the communications, too, are stored, e.g., mails or instant messages, governments can access that as well. Many governments also want to access encrypted communications, and to do that, they are requesting universal access keys or “backdoors” to encrypted communications.
In the second group, businesses claim a legitimate interest in collecting personal data to better serve existing customers and to sell to new customers. Netflix is known for mining the viewing habits of its subscribers to provide video recommendations. Amazon does the same for Audible subscribers, in addition to offering product recommendations based on searches conducted on its e-commerce platform. Businesses have been dazzled by the allure of using data and the internet to specifically target potential new customers. Their hope is that by acquiring enough information on individuals, they can identify those most likely to consume their products and craft a message directly for them. This technique, known as microtargeting, has been used in the US presidential elections. The advertising industry has also hyped up the potential of repeatedly targeting individuals with product messages based on their internet activities. It goes something like this: A person reads an article about an idyllic vacation spot on a news or travel website. Later, they see an ad on a different site for a vacation package to that spot, and then they see the ad again on another site. This “retargeting” is supposed to prompt the person to buy the package and off they go.
However, business data collection goes beyond individual companies looking at their own customers. Companies called “data brokers” aggregate personal information from different sources to create comprehensive profiles of individuals. They then sell this information to interested parties. This type of business data collection has reached a scale that is difficult to grasp. A report by Cracked Labs says that large data brokers such as Acxiom and Oracle collect thousands of data points on millions of users globally, and can provide information such as age, gender, income, religion, sexual orientation, vehicles and property owned, number and age of children, hobbies, political affiliation, whether seeking an abortion, travel history, social media usage and much more.
In 2009, the legal director for the Electronic Frontier Foundation (EFF), Cindy Cohn, called the commerce of personal information the “surveillance business model.” This brilliantly apt term is a reminder that governments, too, can purchase private information from data brokers. This includes foreign governments, which may not have the means to set up their own data-collection system in the host countries.
The third group that seeks access to personal information includes scam artists, cyber criminals, cyber-espionage groups and adversarial governments. They each have different motivations, and depending on their goal, they may use personal data differently.
Financial scammers can use personal information to dupe people into giving their money to supposedly trusted entities. In 2016, Indian authorities broke up several call-centre operations that duped US citizens out of millions of dollars. The callers from India pretended to be Internal Revenue Service (IRS) agents and threatened the victims with huge fines and arrest if they did not immediately make a payment, supposedly to the IRS but using unusual methods, such as iTunes gift cards. In one news report, a treasury department official said that the callers “have information that only the Internal Revenue Service would know.”
The Equifax data breach that came to light in 2017 exposed financial and personal details, such as bank accounts and employment and address histories, of millions of residents of the US and the UK.
Government databases are also prime targets for cyber-espionage groups and other governments. The US government disclosed that a Chinese group had collected personal information of nearly 20 million people who had applied for a government security clearance, and almost 2 million relatives of such applicants, by breaking into a system at the Office of Personnel Management (OPM). The hackers could either be non-state actors or a government-linked group.
If an entity combined the OPM and Equifax data, they could learn which security clearance holders had financial difficulties. Such information would be immensely valuable to foreign governments. Since one of the main justifications that governments use for mass data collection is protecting the citizens, it makes sense to also consider the potential for harm that this data creates.
The Harms of Data
The concern with government mass data collection and backdoors is that if one entity wants to use the data for beneficial reasons, many others want the data for malicious purposes. Suppose a government succeeded in convincing a messaging app to give it the key to unlock all encrypted communications. Even if this government manages to keep that key secret against all external parties, the app creator could come under tremendous pressure from other governments seeking the same access. If the company wants to do business in those countries, it will have to give those governments access as well, if so asked. Now, rather than just one government protecting one key, there may be dozens of governments protecting that one key, or if they each have their own key, protecting dozens of keys. In either case, as the number of people with access increases, so does the probability of the key leaking to the wrong people. Creating a system that permits such access may also make it vulnerable to cryptographic attacks.
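The way risk multiplies with each additional keyholder can be put in rough numbers. The sketch below assumes, purely for illustration, that each government holding an escrowed key has an independent 1 percent chance per year of letting it leak; the point is the shape of the curve, not the specific figures.

```python
# Probability that at least one of n independently held escrow keys
# leaks, given a per-keyholder annual leak probability p.
def prob_any_leak(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.01  # assumed 1% annual risk per keyholder (illustrative only)
for n in (1, 10, 50):
    print(f'{n:2d} keyholders -> {prob_any_leak(p, n):.1%} chance of a leak')
```

With 50 keyholders, the chance of at least one leak in a year approaches 40 percent, even though each individual holder is 99 percent reliable.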
Democratically elected representative governments conduct mass data collection because not enough citizens object to it. Many people say they are not bothered by the invasion of their privacy because they have nothing to hide. This attitude misses the point. They have nothing to hide at the moment, because they either have not engaged in an activity they want to hide or the authorities have not made their habitual activities into an offense. Moreover, while some people may not have anything to hide, others might, and their secrets make them vulnerable to blackmail or coercion. If the person being blackmailed has a sensitive role in protecting national security, critical infrastructure, financial systems or other valuable targets, then the entire population, including those who have no secrets to hide, is at risk of harm.
Data collection leads to data processing. When a government holds millions or billions of records, making sense of them requires additional processing. Experts have pointed out that such systems generate thousands of false leads. Artificial intelligence may reduce the number of false leads, but the “reasoning” behind AI results is a black box, so agents may end up on a suspect’s doorstep with no idea why the person is a suspect. Mass data collection can thus erode freedom and undermine people’s trust in democratic governments. Even hardened cynics need to consider the long-term effects and not dismiss these concerns lightly.
Given the potential for harm, it is worth exploring how much benefit mass data collection provides. Despite the billions of records that the US government has collected, it has failed to show that personal data helps prevent acts of terrorism, according to EFF. By the FBI’s own narrative, one wannabe terrorist who was arrested in 2016 came to its attention because of a concerned community member who was an FBI source. The FBI then built its case through conventional police work, not through data mining.
Similarly, after terror acts, the government requests for “backdoors” to messaging systems are not based on evidence that such access can prevent terror acts. Backdoor access will, therefore, only weaken a secure system that hundreds of millions of people use for legitimate, lawful purposes. This is one reason that over 200 technology industry leaders and privacy researchers have written a joint letter stating that backdoors will make everyone less safe.
The collection of personal information by businesses poses many of the same risks as government surveillance does. To begin with, while every business probably collects some data, there are a few businesses that collect a huge amount of personal data, and that’s excluding the data brokers. First, there is Amazon, which lists at least 30 different businesses that it operates on the web. Then there are Facebook and Google, who dominate online advertising. These companies build detailed user profiles that they then use to provide tailored content and ads.
These profiles can be used to target individuals and lure them into fraudulent situations. In 2013, a Facebook user named Adam decided to explore the extent of individual targeting possible. He set up an ad campaign on the site with the goal of getting a friend of his to click on the ad, using the friend’s demographic information, such as his university, past employer and favourite video game. Within half a day of launching the campaign, the friend clicked on the ad. The experiment cost Adam nothing, since the ad received only two clicks in total. Armed with a database of user profiles, malicious actors could use this technique to lure high-value targets to defraud them or to install malware in their systems. Since such ad campaigns can be created algorithmically, they can target thousands of people easily and at very low cost. During the 2016 US election campaign, several scammers targeted Republicans and supporters of Donald Trump with ads to get them to donate to fake political groups.
Big data in the service of commerce can also be used to subvert individual privacy and rights in potentially harmful ways. In March 2017, journalist Ashley Feinberg discovered ex-FBI director James Comey’s undisclosed and pseudonymous Instagram account by searching for his son’s not-so-anonymous account. Once she was following the son’s account, Instagram suggested other accounts for her to follow, one of which turned out to be Comey’s. Such suggestions are part of most social networking sites, but as the algorithms meant to make the suggestions more useful get more complicated, nobody can say how they work.
Although these examples had fairly benign outcomes, in other cases, they could cause grave harm. Victims of domestic violence or stalking could inadvertently be exposed. So could political refugees and dissidents. In countries that outlaw homosexuality, data could be used to find and persecute closeted individuals.
While some people may find it appealing to receive individually targeted promotional messages based on their actions, others may find it intrusive. The bigger question is whether it works. There does not appear to be much hard data on this. One paper published by MIT researchers in 2011 found that retargeting had mixed results. It works less well than general advertising when people do not have intent to purchase, and works better when they are actively shopping.
The third group that wants personal information can claim no legitimate interest in it. Therefore, to get data, hackers and criminals break into systems or gather data that is not secured. This article has touched upon how personal information in the wrong hands can be harmful, but it is worth examining in more detail. The risks include financial fraud against individuals, financial theft from centralised systems, physical threats against individuals, and national security threats against governments.
In the US, when a person calls their bank or credit card company, it is highly likely that to establish their identity, the financial institution will ask them some questions based on their personal history. One bank makes it a point to inform the customer that this information is coming from public sources. The questions range over areas such as previous employers, past addresses or phone numbers, mortgage history and relatives who may have shared those addresses, phone numbers or accounts. Though this will deter a spur-of-the-moment criminal, a more determined effort could unearth most of those details from public records. Many people make their career history available on LinkedIn and family connections on Facebook. Mortgage records are public filings, which are increasingly searchable online. Thus, these questions provide no real security, especially not in a post-Equifax breach world, where these details are out in the wild.
In the wrong hands, the personal information that brands, social networks and internet data brokers collect can be used against individuals by targeting them in the same way Adam targeted his friend. The goal may perhaps not be to harm the individual, but the individual’s employer. It may have taken only one employee for HBO’s systems to be breached; an NSA contractor unwittingly exposed the agency’s hacking tools to the Russian government. The other risk arises from one of the side effects of large data collection, which is that no one can predict how that data will be used or what results the data analysis may produce. This can lead to people being exposed to unpredictable risks.
Members of the UK government think that backdoors will help them against criminals and terrorists, but this is a short-sighted view, since terrorists can use backdoors as well, putting even more people at risk. If terrorists can intercept communications of government officials, high-profile businesspersons and their families, they can know when and where their targets are most vulnerable.
Turn Data to Wind and Solar
If data is the new oil, then just as the world learned to curb its consumption of oil for the sake of self-preservation, it now needs to curb its collection of data. This means that while data may still be collected and used, the attitude towards it needs to change. Instead of oil, data should be treated like solar or wind energy. It should be used immediately, or stored for a short time, but then it should be gone.
At present, organisations seem to collect data for the sake of collecting it, even when it is unnecessary. For example, a local library in the UK requires online applicants to provide their gender, which has no obvious utility in letting people borrow books. The field is there because it costs nothing, and people provide the information without question.
Figure 1: Online Library Membership Application Requires the User to Provide Their Gender
Privacy advocates and rights groups have been raising the alarm over the unbounded collection of personal information for decades. In 1971, Arthur R. Miller, law professor at University of Michigan, testified before a Senate subcommittee:
“Few people seem to appreciate the fact that modern technology is capable of monitoring, centralising, and evaluating these electronic entries—no matter how numerous they may be—thereby making credible the fear that many Americans have of a womb-to-tomb dossier on each of us.”
Yet, many privacy advocates take the need to collect data as a given, and mostly recommend stronger protections in terms of limits on how governments and businesses may use the information.
Personal data has no natural scarcity, leading to unnecessary collection and abuse. To counter this, governments must impose scarcity by limiting what data may be collected, controlling how long it may be stored, and requiring data to be deleted. Going further, people need to adopt an attitude of abstention from collecting data, just as the world learned to become less dependent on oil.
The German term “datensparsamkeit” captures the idea of collecting as little data as necessary. It translates roughly into English as “data frugality” or “data abstention.” The EFF has advocated such an approach for many years. However, the amount of data collected has continued to grow. As AI and machine learning are increasingly put to data-mining use, and as people talk of putting personal data into blockchains, which are designed to retain data permanently, the need to limit collection becomes ever more imperative.
A library card system designed with data abstention in mind will not ask for gender data because it is not necessary. Law enforcement officials with data abstention in mind will not collect biometric information from everyone they arrest. Data-frugal governments will prohibit ISPs from collecting user access logs or require them to purge the logs after a brief period, say a week or one month. Counterterrorism officials with a data-frugality mindset may approach the problem by building community relations and developing informants rather than enormous systems that collect and fruitlessly analyse billions of records on millions of harmless citizens.
Businesses with a data-abstention mindset will purge their website access logs after a few days. E-commerce companies may delete purchase histories after some meaningful amount of time, such as the return period or after the warranty expires. Combining the concept of data abstention with user consent, data brokers should only be allowed to collect information from people who agree to it, and only the information they allow.
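A retention rule of this kind is simple to implement. The following is a minimal illustration in Python, with invented record fields and a hypothetical seven-day retention period; it describes no particular company's practice:

```python
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=7)  # purge anything older than a week

def purge_expired(records, now=None):
    """Keep only log records younger than the retention period.

    Each record is a (user_id, url, timestamp) tuple.
    """
    now = now or datetime.utcnow()
    return [r for r in records if now - r[2] <= RETENTION_PERIOD]

# Invented example: a ten-day-old entry is dropped, a one-day-old one survives.
now = datetime(2018, 1, 10)
logs = [
    ("alice", "/home", datetime(2017, 12, 31)),  # 10 days old: purged
    ("bob", "/shop", datetime(2018, 1, 9)),      # 1 day old: kept
]
kept = purge_expired(logs, now)
```

A scheduled job running such a purge daily keeps the data trove permanently small, which is the point: there is simply less for an intruder to steal.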
Of course, what constitutes a necessary amount and duration is open to interpretation. The onus should fall on the collector to periodically evaluate the information that is collected, and to demonstrate to themselves and to external review bodies that the data is essential and serves some useful purpose. If they must store personal data, data collectors should also first remove as much personal information as possible, so that the data alone cannot be used to identify individuals.
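Stripping direct identifiers before storage can be as simple as replacing them with salted one-way hashes. The sketch below is illustrative only: the field names are hypothetical, and real de-identification needs more care, since hashed fields can sometimes still be re-identified by linkage:

```python
import hashlib
import os

SALT = os.urandom(16)  # per-dataset secret; discarding it makes re-identification harder

def pseudonymise(record, identifying_fields=("name", "email")):
    """Replace direct identifiers in a record with salted one-way hashes."""
    out = dict(record)
    for field in identifying_fields:
        if field in out:
            digest = hashlib.sha256(SALT + out[field].encode()).hexdigest()
            out[field] = digest[:16]  # truncated hash, still enough to link records
    return out

# Invented library-card record: the loan survives for analysis,
# but the borrower is no longer directly identifiable.
record = {"name": "Jane Doe", "email": "jane@example.com", "borrowed": "1984"}
safe = pseudonymise(record)
```

Within one dataset the same person always maps to the same hash, so aggregate analysis still works; without the salt, reversing the mapping is impractical.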
Current approaches to security hardening may protect systems for a time, but as the steady rate of breaches shows, systems and processes are not failsafe. That does not mean systems should not be secured. On the contrary, more effort, investment and vigilance are needed, and fraud and intrusion detection will need to occur in real time.
Like offline criminals breaking into a building, cyber criminals are often working against time. They want to get into a system, get the data and get out quickly. Eliminating data troves while also increasing security measures reduces the incentive for criminals to break into systems. To collect data, they would need to break into a system and then establish a persistent connection. Such connections increase their chances of being discovered.
A data-frugal attitude does not mean turning one’s back on technology or even data use. Rather, it calls for a more rational and purpose-driven use of data. Imagine standing in front of an all-you-can-eat buffet every day. One may either consume large quantities of everything, or be selective, choose some items and leave others, count calories and eat sensibly. It may be fun to binge eat, but it is unhealthy and ultimately detrimental.
 Joe Sterling, “Is Petraeus pillow talk a security threat?” CNN, 14 November 2012, http://edition.cnn.com/2012/11/13/us/petraeus-security-threat/index.html.
 Alexis C. Madrigal, “I’m Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web,” The Atlantic, 29 February 2012, https://www.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-151-and-104-other-companies-151-are-tracking-me-on-the-web/253758/.
 Mike Masnick, “The US Government Today Has More Data On The Average American Than The Stasi Did On East Germans,” TechDirt, 3 October 2012, https://www.techdirt.com/articles/20121003/10091120581/us-government-today-has-more-data-average-american-than-stasi-did-east-germans.shtml.
 Simon Phipps, “Should the government be allowed to collect data on UK citizens to prevent terrorism and criminal activity?” British Library, 28 January 2015, https://www.bl.uk/my-digital-rights/articles/mass-data-collection.
 Murray Hunter, “Australia’s Domestic Spying Revealed, Geopolitical Monitor,” Geopolitical Monitor, 11 November 2013, https://www.geopoliticalmonitor.com/australian-domestic-spying-revealed-4882/.
 Afshan Yasmeen, “Natgrid will deter terror: Shinde,” The Hindu, 15 December 2013,
 “How Much Meta Data Does Your Country Collect?” TorGuard, 8 March 2017,
 Emma Woollacott, “UK Home Secretary Demands WhatsApp Backdoor From People Who ‘Understand The Necessary Hashtags’,” Forbes, 26 March 2017, https://www.forbes.com/sites/emmawoollacott/2017/03/26/uk-home-secretary-demands-whatsapp-backdoor-from-people-who-understand-the-necessary-hashtags/#78adefca2df4.
 Tanzina Vega, “Online Data Helping Campaigns Customize Ads,” New York Times, 20 February 2012, http://www.nytimes.com/2012/02/21/us/politics/campaigns-use-microtargeting-to-attract-supporters.html.
 Wolfie Christl, “Corporate Surveillance in Everyday Life,” Cracked Labs, June 2017,
 Noam Cohen, “As Data Collecting Grows, Privacy Erodes,” New York Times, 15 February 2009, http://www.nytimes.com/2009/02/16/technology/16link.html.
 Charles Riley and Omar Khan, “India busts bogus call centres for posing as the IRS,” CNN Money, 6 October 2016, http://money.cnn.com/2016/10/06/news/india-irs-scam-arrests/index.html.
 Karen Turner, “The Equifax hacks are a case study in why we need better data breach laws,” Vox, 14 September 2017, https://www.vox.com/policy-and-politics/2017/9/13/16292014/equifax-credit-breach-hack-report-security.
 Mike Levine and Jack Date, “22 Million Affected by OPM Hack, Officials Say,” ABC News, 9 July 2015, http://abcnews.go.com/US/exclusive-25-million-affected-opm-hack-sources/story?id=32332731.
 “Inside the Mufid Elfgeeh Investigation,” FBI, 16 May 2016, https://www.fbi.gov/news/stories/inside-the-mufid-elfgeeh-investigation.
 Andrea Peterson, “The debate over government ‘backdoors’ into encryption isn’t just happening in the U.S.,” Washington Post, 11 January 2016,
 Sierra Webb, “Google & Facebook Dominate US Ad Share Market,” StrataBlue, 3 November 2017, https://stratablue.com/google-facebook-dominate-share-market/.
 Maegan Vazquez, “Scam PACs Target Conservatives with a ‘Dinner With Trump’ Message on Facebook,” Independent Journal Review, August 2016, http://ijr.com/2016/08/682420-scam-pacs-target-conservatives-with-a-dinner-with-trump-message-on-facebook-read-the-fine-print/.
 Ashley Feinberg, “This Is Almost Certainly James Comey’s Twitter Account,” Gizmodo, 30 March 2017, https://gizmodo.com/this-is-almost-certainly-james-comey-s-twitter-account-1793843641.
 Anja Lambrecht and Catherine Tucker, “When Does Retargeting Work? Information Specificity in Online Advertising,” MIT, 2 December 2011, http://ebusiness.mit.edu/research/papers/2011.12_Lambrecht_Tucker_When%20Does%20Retargeting%20Work_311.pdf.
 John Bellamy Foster and Robert W. McChesney, “Surveillance Capitalism,” Monthly Review 66, no. 3 (July–August 2014), https://monthlyreview.org/2014/07/01/surveillance-capitalism/.
 Translated from German by Dr. Axel Harneit-Sievers, the Heinrich Böll Foundation.
 Martin Fowler, Datensparsamkeit, 12 December 2013, https://martinfowler.com/bliki/Datensparsamkeit.html.
 Valerie Ross, “Forget Fingerprints: Law Enforcement DNA Databases Poised To Expand,” PBS.org, 2 January 2014, http://www.pbs.org/wgbh/nova/next/body/dna-databases/.
These developments lead to a series of questions. What further impact will AI have on the profession and its constituents? How will the changes affect the process of delivery of legal services and the work of judges and lawyers? Will the disruption result in the creation of new kinds of jobs and, as a corollary, make some existing jobs redundant? How must legal educational institutions adapt their courses to deal with changing market needs? Will AI enhance the goal of access to justice?
This essay has three sections. The first details the constituent elements of AI technologies and why they have generated both interest and scepticism among many in the legal profession. The second paints a non-exhaustive landscape of solutions by mapping recent AI-driven legal products. The last looks at AI’s impact on the profession and the political, economic and ethical challenges that are emerging as a result, and examines the pressures driving the advancement of these technologies.
Unpacking Artificial Intelligence
Unpacking the term ‘artificial intelligence’ is necessary because it has several competing and overlapping definitions. In its most basic sense, it means the capacity of a computer to perform tasks commonly associated with human beings. This includes the ‘ability to reason, discover meaning, generalise, or learn from past experience’ and thereby find patterns and relations in order to respond dynamically to changing situations. Three key capacities can be identified in AI systems (or cognitive computing systems): finding and gathering information; analysing and making sense of the information gathered; and, thereafter, generating insights and making decisions.
Within the field of AI systems, there exist several technologies. These include natural language processing, where computer systems are able to analyse text to generate content, classify information and even answer questions; machine learning, where computer systems through working with a large volume of data are able to discover patterns without precise and clear programmed instructions; speech recognition, which involves the conversion of speech to text functions and vice versa; expert systems which have the ability to perform tasks which need the kind of expertise only humans have possessed until now; and vision, which includes the ability to recognise and analyse different images.
In the past two years, the conversation on AI has been amplified within legal circles. It began with the development of an IBM Watson-powered robot called ROSS, billed as the world’s first AI lawyer, which, through its cognitive computing capabilities, could answer research questions by mining data and deciphering trends and patterns. Earlier this year, Cyril Amarchand Mangaldas became the first Indian law firm to employ software that uses AI to identify, extract, analyse and evaluate clauses and other information from contracts and other legal documents. Recognition that AI will have an impact on the legal industry has been growing. In a widely cited study by Altman Weil of 320 law firms in the US, 47 percent of respondents in 2015 felt that paralegals could be replaced by AI-powered products, compared to 35 percent who felt the same way in 2011. The survey also showed that 35 percent of respondents in 2015 felt that AI could replace the work of first-year associates, compared to 23 percent four years previously.
These advancements in technology prompted Klaus Schwab, founder and Executive Chairman of the World Economic Forum, to suggest that the world is on the cusp of a “fourth industrial revolution” fuelled by technology, which is combining physical, digital and biological worlds, and which is likely to create an adverse impact on jobs and security, and increase inequality, unless organisations learn to adapt. An Oxford University report suggested that over 47 percent of US jobs were at risk due to automation, and in China the figure could be as high as 77 percent.
While discussions on the use of AI in law have been taking place in academia for more than 30 years, its influence on actual legal practice is recent. Key factors driving the present resurgence include the increase in computing power (exemplified by Moore’s law, the observation that the processing power of computers doubles roughly every two years); the availability of large volumes of data upon which technologies can be tested and developed; the evolution of new and more effective algorithms, which have synergistic connections with better hardware and larger data sets; and, finally, AI’s access to capital, which has exploded in the last couple of years, with 200 AI start-ups raising US$1.5 billion in equity funding.
AI-driven Models for the Legal Profession
To ascertain the potential impact of such advances on the industry, a scoping of existing technologies is useful. The major areas in which AI is being utilised include predictive analysis, legal research, e-discovery and document review, and self-help tools, as well as administrative assistance and cyber security.
Uncertainty is commonly associated with litigation because of the number of variables, such as the forum, the judge and the evidence, that can have an impact on a case. To help lawyers manage this risk, a number of solutions have emerged. Lex Machina is an example of a legal analytics platform that uses large volumes of litigation information to provide insights into the workings of judges, lawyers, parties and the cases before them. Lex Machina initially focused on providing data-driven strategies for intellectual property cases, so that risks could be assessed and decisions taken on the scope and strategy of the litigation. Premonition AI, another legal analytics firm, provides information on the effectiveness of litigators before particular judges by mining what it claims is the largest litigation database in the world. It aims to determine the track record of judges and lawyers so that parties can make choices based on empirical insights. These platforms are designed to reduce the inefficiencies associated with litigation by enabling lawyers to develop strategies, predict outcomes and offer data-driven, actionable solutions to their clients.
Another area with great scope for innovation is the manner in which traditional legal research is undertaken. As research involves multiple processes of identification, categorisation and review of information, the need for tools to enhance the process is clear. The purpose of technologies such as the one developed by ROSS Intelligence is to help lawyers find cases and secondary material using natural language processing. The tool allows researchers to ask questions in plain English; it then scans and examines its database to provide answers and readings from leading cases and articles relevant to the question. As the company’s CEO explains, “ROSS pretty much mimics the human process of reading, identifies patterns in text, and provides contextualised answers with snippets from the document in question”. Its focus is primarily on bankruptcy and insolvency law. Older legal research platforms such as Westlaw and LexisNexis are also incorporating natural language processing into their searches.
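The matching step such tools perform can be illustrated with a toy example. This is not ROSS’s actual (proprietary) method; it is a generic bag-of-words similarity sketch in Python that returns the document closest in wording to a plain-English question:

```python
import math
from collections import Counter

def tokenise(text):
    """Lowercase words, with trailing punctuation stripped."""
    return [w.strip(".,?").lower() for w in text.split()]

def cosine(a, b):
    """Cosine similarity between two word-count vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_match(question, documents):
    """Return the document whose wording most resembles the question."""
    q = Counter(tokenise(question))
    return max(documents, key=lambda d: cosine(q, Counter(tokenise(d))))

# Invented two-document corpus for illustration.
corpus = [
    "A debtor may file for bankruptcy under chapter 7.",
    "Trademarks must be renewed every ten years.",
]
answer = best_match("Can a debtor file for bankruptcy?", corpus)
```

Production systems add weighting (rare words count for more), synonym handling and trained ranking models, but the retrieve-and-rank shape is the same.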
AI is also making inroads in terms of innovations regarding discovery of facts. OpenText is a platform that uses analytics and machine learning to identify facts that are central and important for litigation, as well as for compliance and governance. It allows the user to filter and focus the research by identifying facts and relationships that are important from the context of the case, through analysing communications as well as other information such as terms, sources and types of files. 
While legal research requires critical engagement and analysis, document review is often seen as a task that is both time-consuming and mundane. Moreover, it is prone to human error on account of the large volume of information that often has to be processed. Kira is an AI powered platform designed to identify and analyse data by extracting information such as clauses and concepts from contracts and thereby allow the user to analyse trends and patterns between documents. It is being used in due diligence, contract analysis and lease abstraction, to ascertain the risks and challenges that can emerge, by comparing the documents concerned with vast volumes of previously assembled data. Ravn, another AI platform, organises, analyses and summarises documents, with the aim of allowing organisations to increase efficiency and productivity while reducing risk.
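Clause extraction of the kind described can be sketched crudely with pattern matching. Real products such as Kira rely on trained models rather than regular expressions; the contract text and headings below are invented for illustration:

```python
import re

# Invented, toy contract text with numbered clauses.
contract = """
1. Term. This agreement commences on 1 January 2018.
2. Termination. Either party may terminate with 30 days notice.
3. Governing Law. This agreement is governed by the laws of India.
"""

def extract_clauses(text):
    """Split a numbered contract into {heading: body} pairs."""
    pattern = re.compile(r"\d+\.\s+([A-Z][\w ]+)\.\s+(.*)")
    return {m.group(1): m.group(2).strip() for m in pattern.finditer(text)}

clauses = extract_clauses(contract)
# A reviewer can now query clauses across many documents at once,
# e.g. flag contracts whose termination clause mentions a notice period.
needs_review = "30 days" in clauses.get("Termination", "")
```

The gain is not the parsing itself but the scale: once clauses are structured data, thousands of contracts can be compared for trends and outliers in seconds.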
In addition to tackling these aspects central to the litigation process, whether predictive analysis, legal research or document review, AI has also been used to develop chatbots that respond to user queries as self-help tools. Lawbot is one such example, designed to help victims of sexual assault in the UK. Another application, DoNotPay, helps people file for compensation for delayed flights and decide whether or not they should pay parking tickets.
Further, AI-powered platforms have been used by the industry to strengthen cyber security and build more robust analyses of the threats companies face. AI can also do the job of an administrative assistant, helping to organise tasks such as scheduling meetings, making travel arrangements and managing expenses.
Given these developments, the next section will attempt to situate the impact of AI on the legal profession by looking at the responses to technology, the impact on ways of working, and the regulatory challenges.
Transforming the Profession?
A mixed response to using technology
The increasing pervasiveness of technology in the legal industry is also driven by the fact that clients expect value for money, lower risks and greater efficiencies in terms of time and outcomes from legal providers. According to a Thomson Reuters study, there has been a 484 percent increase in the number of patents filed globally with respect to new legal services technology in the last five years, with 579 patents filed in 2016, up from just 99 in 2012.  This signifies that there is great demand from law firms for new types of legal services such as those that provide litigation support, risk analysis, due diligence and review services, to keep themselves competitive in terms of cost, precision and efficiency of services. It also indicates that clients expect new legal services to keep up with the digital age, reduce risk and costs while improving communication between consumers and providers of legal services. The economic case for the use of technology – in addition to the increasing capacities for the firm – is that it enables monotonous and tedious tasks to be completed by AI solutions, allowing lawyers to focus their energies on strategic and high value functions.
The introduction of these new technologies, however, will take time and planning. It will require buy-in from all stakeholders, as well as clearly thought-out implementation and integration strategies. Among law firms, a 2017 survey by Altman Weil found that 25.9 percent of the US firms surveyed were wholly ignorant of the role of AI in the legal industry, 7.5 percent were already using some of the tools, and 28.8 percent were exploring options. The remaining 37.8 percent, despite knowing of the developments in AI, were not using it in their operations. This large share of non-adopters indicates continued scepticism about AI in the law firm community and a reluctance to fully embrace the technology. Law schools, a key constituency in facilitating the changes brought by AI, are having to adapt their courses and introduce elements such as ‘tech audits’ to make their students more tech literate.
New ways of working
Among the key reasons holding back the use of AI is the fear that technologies driven by it will result in a loss of jobs for lawyers. Jordan Furlong, a leading strategic legal consultant, has argued that junior lawyers should be open to these technologies because their purpose is to add value, not to take it away from the lawyer. Furlong suggests that if he were a young lawyer, he would want to know the following: “First, what is happening in the legal marketplace? Secondly, tell me what kind of skills, attributes and knowledge I need and who out there is going to engage me for that kind of work. And third, I need help being assimilated”. He notes that much of the debate so far has been about automation rather than augmentation: in the former, the machine does everything relating to tasks like due diligence or document or case review, while in the latter, the machine facilitates the lawyer’s work. Automation may result in job losses, but a lawyer’s job also includes elements such as ‘strategy, creativity, judgement and empathy’ that are needed for tasks such as advising, negotiating and appearing in court, none of which can yet be automated. Thus, although the advent of technology is introducing a new vocabulary to the legal profession, with terms such as data analytics and algorithms becoming commonplace, and this can be unsettling, lawyers, law schools and regulatory bodies should see these changes as enablers rather than threats, and embrace them.
According to the Canadian Bar Association Legal Futures Initiative, the key to adapting the profession to the future is innovation, which goes beyond the mere development and adoption of new technologies. Innovation would require rethinking the ways in which lawyers are trained, how the profession is regulated and how the public is protected. In a dynamic new environment, where several legal tasks are rendered redundant, the report suggests that new disciplines will emerge, including those of legal knowledge engineers who can build online legal systems; legal process analysts who can develop means of distributing complex legal work in organisations; legal support system managers who develop and manage processes such as workflow systems; and legal project and risk managers who introduce project management and monitoring techniques. Richard Susskind, author of Tomorrow’s Lawyers: An Introduction to Your Future, argues that legal work is going to evolve from a ‘bespoke’ service to one which, through a series of standardisations, systematising and packaging, will become a ‘commoditised’ service. This will result in a decomposition of legal work, both in litigation, into disaggregated tasks such as ‘document review, legal research, project management, litigation support, (electronic) disclosure, strategy, negotiation, tactics, advocacy’, as well as in transactional work, which would include ‘due diligence, legal research, transaction management, negotiation, bespoke drafting, document management, legal advice, and risk management’. These changes will produce new ways of working in terms of how practices are structured, fees are calculated, projects are managed and clients engaged, as well as flexible models for working, including part law-part technology practices, legal hubs and marketplaces, virtual firms, etc.
With such rapid developments, several issues are likely to arise on the regulatory front. The first is the question of bias. Several courts in the US have started to use AI products in sentencing to predict the likelihood of recidivism among convicted persons. While algorithms are usually coded to be neutral, the programmers who write them may not be, or may operate with assumptions that lead to bias. As Kranzberg’s first law of technology states, “Technology is neither good nor bad; nor is it neutral”. A study by the US news organisation ProPublica of a risk assessment tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) bears this out. It found that the tool was biased against black defendants, who were incorrectly judged to be at high risk of recidivism, compared to white defendants, who were marked as low risk. This points to the need for greater transparency in the methodologies used to develop such technologies, as well as a degree of accountability and responsibility for their findings when these do not adhere to principles of fairness and equity.
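The disparity ProPublica measured can be made concrete with a small worked example. The figures below are invented, not ProPublica’s data; they show how a false positive rate, the share of non-reoffenders wrongly flagged as high risk, can differ between two groups even when each prediction looks reasonable in isolation:

```python
# Each record: (group, labelled_high_risk, actually_reoffended) — invented toy data.
assessments = [
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", False, False), ("B", True,  False), ("B", True, True),
]

def false_positive_rate(records, group):
    """Among people in `group` who did NOT reoffend, the share flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

fpr_a = false_positive_rate(assessments, "A")  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(assessments, "B")  # 1 of 3 non-reoffenders flagged
# The tool's errors fall twice as often on group A, even though both groups
# contain the same number of actual reoffenders.
```

Auditing a deployed tool means computing exactly such per-group error rates, which is only possible if the tool’s inputs and outputs are open to inspection.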
The second, related challenge is that of unveiling the ‘black box’ of who is responsible for how an AI-driven product operates. In many instances, the product is the result of multiple interacting pieces of code that analyse data and make decisions, leaving even the developers unsure of how each component works or how the tool reaches its decisions. This raises many questions about the elements of the decision-making process, the rationale behind particular connections and the reasoning for certain programming outcomes. There have now been calls to give robots an ‘ethical black box’ that would ensure these products keep track of their decisions and can explain how they were arrived at.
The third challenge relates to data confidentiality. It is essential that, as consumers of large volumes of data, AI service providers are also held to standards regarding how they access and use data. Finally, though the field of AI is developing rapidly, the legal regulatory environment needs to develop a new vocabulary of guidelines and frameworks, not to stymie development in this field, but to evolve new ways of thinking about criminality, ethics and responsibility from a bottom-up, self-improving understanding of how the field is developing. This, too, could perhaps be driven by AI.
The influence of AI on the legal profession presents an exciting means of improving efficiency through legal research tools and administrative assistants; of doing good through chatbots as self-help tools; of reducing risk and uncertainty through predictive technologies and e-discovery tools; and of supporting compliance and governance. Rather than mere automation, it has the potential to augment the profession, opening up new kinds of jobs, specialisations and influences for lawyers, law schools and regulatory bodies through their association with disciplines such as engineering, finance, management, political science and risk assessment. However, it requires members of the legal profession to be willing to embrace change, seeing AI not as a threat but as a tool through which their productivity, opportunity and potential can increase. Finally, given that this is a space of rapid development, it is essential that regulatory frameworks are developed to reduce the opaqueness and secrecy surrounding the development of these technologies and the use of algorithms and data. It is vital that these products meet standards of transparency and accountability for their performance.
 “Q&A: Richard and Daniel Susskind on the Future of Law | Canadian Lawyer Mag,” http://www.canadianlawyermag.com/legalfeeds/author/na/qanda-richard-and-daniel-susskind-on-the-future-of-law-6828/.
 “Artificial Intelligence |Encyclopedia Britannica,” https://www.britannica.com/technology/artificial-intelligence.
 “Cognitive Computing: Transforming Knowledge Work,” Thomson Reuters, January 24, 2017, https://blogs.thomsonreuters.com/answerson/cognitive-computing-transforming-knowledge-work/.
 “Artificial Intelligence in Law: The State of Play,” Neota Logic.
 “Demystifying Artificial Intelligence,” DU Press,
 “Why Artificial Intelligence is the Future of Growth _ Accenture”
 “World’s First Robot Lawyer ROSS Hired by US Law Firm – Livemint,” http://www.livemint.com/Politics/bQNLHR96A5G4Kvg81JwWFM/Worlds-first-robot-lawyer-ROSS-hired-by-US-law-firm.html.
 Kian Ganz, “Cyril Amarchand ‘First’ to Sign up for Machine Learning Contracts Software, but Is AI the Death or Future of Lawyers?,” https://www.legallyindia.com/law-firms/cyril-amarchand-first-to-sign-up-for-machine-learning-contracts-software-but-is-ai-the-death-or-future-of-lawyers-20170131-8267.
 “ Law Firms in Transition-2015”http://www.altmanweil.com/dir_docs/resource/1c789ef2-5cff-463a-863a-2248d23882a7_document.pdf
 “The Fourth Industrial Revolution, by Klaus Schwab,” World Economic Forum, https://www.weforum.org/about/the-fourth-industrial-revolution-by-klaus-schwab/.
 “Robots Will Steal Your Job: How AI Could Increase Unemployment and Inequality,” Business Insider,http://www.businessinsider.in/Robots-will-steal-your-job-How-AI-could-increase-unemployment-and-inequality/articleshow/50997302.cms.
 “An AI Law Firm Wants to ‘Automate the Entire Legal World,’” Futurism (blog), January 30, 2017, https://futurism.com/an-ai-law-firm-wants-to-automate-the-entire-legal-world/.
 “Demystifying Artificial Intelligence.”; Mark Purdy and Paul Daugherty, “Accenture: Why AI Is the Future of Growth,” n.d., https://www.accenture.com/lv-en/_acnmedia/PDF-33/Accenture-Why-AI-is-the-Future-of-Growth.pdf.
 “What’s Driving the Machine Learning Explosion?” https://hbr.org/2017/07/whats-driving-the-machine-learning-explosion.
 “Artificial Intelligence Explodes: New Deal Activity Record For AI,” https://www.cbinsights.com/research/artificial-intelligence-funding-trends/.
 “Lex MachinaTM | LexisNexis,” LexisNexis® IP Solutions (blog), http://intl.lexisnexisip.com/products-services/intellectual-property-solutions/lexisnexis-lexmachina.
 “Premonition | Legal Analytics | Law | Court Analysis | Litigation,” https://premonition.ai/law/#1473189571824-3d711aff-1fa9.
 “ROSS Intelligence,”
“YC’s ROSS Intelligence Leverages IBM’s Watson To Make Sense Of Legal Knowledge | TechCrunch,” https://techcrunch.com/2015/07/27/ross-intelligence/.
 “Axcelerate EDiscovery & Investigations Solutions – Recommind,”, https://www.recommind.com/axcelerate-ediscovery-product-page/.
 “Kira Systems | Machine Learning Contract Search, Review and Analysis,” https://kirasystems.com/.
“RAVN – IManage,”
 “Cambridge Students Build a ‘Lawbot’ to Advise Sexual Assault Victims | Education | The Guardian,” https://www.theguardian.com/education/2016/nov/09/cambridge-students-build-a-lawbot-to-advise-sexual-assault-victims.
 “Rise of the Robolawyers – The Atlantic,” https://www.theatlantic.com/magazine/archive/2017/04/rise-of-the-robolawyers/517794/.
“Preparing for Artificial Intelligence in the Legal Profession,” https://www.lexisnexis.com/lexis-practice-advisor/the-journal/b/lpa/archive/2017/06/07/preparing-for-artificial-intelligence-in-the-legal-profession.aspx.
 “Thomson Reuters Analysis Reveals 484% Increase in New Legal Services Patents Globally,” Thomson Reuters, August 16, 2017, https://www.thomsonreuters.com/content/thomsonreuters/en/press-releases/2017/august/thomson-reuters-analysis-reveals-484-percent-increase-in-new-legal-services-patents-globally.html.
 “Global Legal Tech Is Transforming Service Delivery,” https://www.forbes.com/sites/markcohen1/2017/08/29/global-legal-tech-is-transforming-service-delivery/#2b832c2b1346.
 “Artificial Intelligence Disrupting the Business of Law,” accessed November 5, 2017, https://www.ft.com/content/5d96dd72-83eb-11e6-8897-2359a58ac7a5.
 “Artificial Intelligence | Canadian Lawyer Mag,” http://www.canadianlawyermag.com/author/sandra-shutt/artificial-intelligence-3585/.
 “ Law firms in transition-2017” http://www.altmanweil.com//dir_docs/resource/90D6291D-AB28-4DFD-AC15-DBDEA6C31BE9_document.pdf
 Samar Warsi, “How a Law School Is Preparing Its Students to Compete Against AI,” Motherboard, April 14, 2017, https://motherboard.vice.com/en_us/article/d7b8my/law-school-automation-ai-lawyers-york-university.
“How Will Artificial Intelligence Affect the Legal Profession in the next Decade?,” http://law.queensu.ca/how-will-artificial-intelligence-affect-legal-profession-next-decade
 “A.I. Is Doing Legal Work. But It Won’t Replace Lawyers, Yet. – The New York Times,” https://www.nytimes.com/2017/03/19/technology/lawyers-artificial-intelligence.html?_r=2.
 “Artificial Intelligence Law | Legal AI Solutions,” TOPBOTS, June 10, 2017, https://www.topbots.com/automating-the-law-a-landscape-of-legal-a-i-solutions/.
“CBA legal futures initiative- Futures: Transforming the delivery of legal services in Canada” http://www.cba.org/CBAMediaLibrary/cba_na/PDFs/CBA%20Legal%20Futures%20PDFS/Futures-Final-eng.pdf
Richard Susskind, Tomorrow’s Lawyers: An Introduction to Your Future, 1 edition (Oxford, United Kingdom: Oxford University Press, 2013).
 “ The Future of Law and Innovation in the Profession” https://www.lawsociety.com.au/cs/groups/public/documents/internetcontent/1272952.pdf
 “Artificial Intelligence | Canadian Lawyer Mag.”
 Melvin Kranzberg, “Technology and History: ‘Kranzberg’s Laws,’” Technology and Culture 27, no. 3 (1986): 544–60, https://doi.org/10.2307/3105385.
 “How We Analyzed the COMPAS Recidivism Algorithm — ProPublica,” https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm/.
“The Dark Secret at the Heart of AI – MIT Technology Review,” https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.
 “Give Robots an ‘Ethical Black Box’ to Track and Explain Decisions, Say Scientists | Science | The Guardian,”
 “The Ethics of Artificial Intelligence in Law,” https://www.digitalpulse.pwc.com.au/artificial-intelligence-ethics-law-panel-pwc/.
 Ibid; “Preparing for Artificial Intelligence in the Legal Profession.”
 Mark Purdy and Paul Daugherty, “Accenture: Why AI Is the Future of Growth.”&p[url]=http://www.digitalpolicy.org/transforming-legal-profession-impact-challenges-artificial-intelligence/&&p[images]=http://www.digitalpolicy.org/wp-content/uploads/2017/11/Artificial-Intelligence-150x150.jpg" target="_blank">
It is now well known that algorithms can lead to unintended consequences (for instance, in the case of big-data mining).[i] This piece focuses on emerging legal issues in the use of Collaborative Filtering Algorithms (CFAs) to generate the recommender systems that have now become the tech industry’s staple diet.[ii] Most of these issues, as this paper argues, stem from ignoring a fundamental truth about online services today: they tend to interact differently with different users (which, again, is a necessary consequence of “personalised feeds” like the ones used by Facebook, Google or even websites such as Netflix and Amazon). The failure to recognise this phenomenon means that the extant laws, policies and even conversations surrounding the relationship between online service providers and users tend to paint all consumers with a generic brush, not realising that “users” constitute many dynamic and stratified classes, each of which may have different claims to make of these service providers and different levels of bargaining strength with which to enforce their rights.
What is Collaborative Filtering?
Collaborative filtering (CF) is a popular recommendation algorithm that bases its predictions and recommendations on the ratings or behaviour of other users in the system. The fundamental assumption behind this method is that the opinions of other users can be selected and aggregated in such a way as to provide a reasonable prediction of the active user’s preference. Intuitively, these methods assume that if users agree about the quality or relevance of some items, they are likely to agree about other items as well. Thus, as Ekstrand, Riedl and Konstan explain, if a group of users likes the same things as Mary, then, among things she has not yet seen, Mary is likely to like the things the group likes.[iii] CF is therefore a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating).[iv] This creates a “network effect,” typical of the software industry: most such products display and rely on positive network effects, meaning that more usage of the product by any user increases the product’s value for subsequent users.
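As a concrete (and entirely hypothetical) illustration of user-based collaborative filtering, the sketch below predicts Mary’s rating for an item she has not seen from the ratings of users who agree with her. The users, items, ratings and function names are invented for this example; this is a minimal sketch, not any platform’s actual system.

```python
from math import sqrt

# Toy ratings matrix: user -> {item: rating}. All names and numbers
# are invented for illustration.
ratings = {
    "mary": {"A": 5, "B": 4, "C": 1},
    "tom":  {"A": 5, "B": 5, "C": 1, "D": 4},
    "rita": {"A": 1, "B": 1, "C": 5, "D": 1},
}

def cosine_sim(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    den = (sqrt(sum(ratings[u][i] ** 2 for i in common))
           * sqrt(sum(ratings[v][i] ** 2 for i in common)))
    return num / den

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    votes = [(cosine_sim(user, v), ratings[v][item])
             for v in ratings if v != user and item in ratings[v]]
    total = sum(s for s, _ in votes)
    return sum(s * r for s, r in votes) / total if total else None

# Mary agrees with Tom far more than with Rita, so her predicted rating
# for the unseen item "D" lands close to Tom's rating of 4.
print(round(predict("mary", "D"), 2))  # ≈ 3.12
```

The prediction is pulled towards Tom’s rating precisely because Mary’s past behaviour matches his, which is the “users who agree will keep agreeing” assumption described above.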
However, this also means that there is a “diminishing marginal utility” with respect to the data provided by each new user: the change in the accuracy of the algorithm shrinks as more and more users’ data are added to the set, so the value per user’s data plateaus after a point. As the graph below illustrates, there is a rapid rise in accuracy when the first few users’ data (i.e., the data belonging to the high-value users, or HVUs) is assimilated into the system, as they provide, hypothetically, around 98 percent of the predictive accuracy of an algorithm. After that point, the gains in accuracy from the data provided by each new user become negligible.
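The plateau described above can be sketched with a toy saturating-accuracy model. The exponential curve, the 98 percent ceiling and the rate parameter are assumptions made purely for illustration, not measurements of any real system:

```python
# Purely illustrative model of diminishing marginal utility of user data:
# predictive accuracy saturates towards a ceiling (here the hypothetical
# 98 percent mentioned in the text) as more users' data is assimilated.
def model_accuracy(n_users, ceiling=0.98, rate=0.5):
    """Hypothetical accuracy after assimilating data from n_users."""
    return ceiling * (1 - (1 - rate) ** n_users)

# Marginal gain contributed by each successive user shrinks rapidly:
# the first user contributes far more accuracy than the tenth.
gains = [model_accuracy(n + 1) - model_accuracy(n) for n in range(10)]
```

Under any saturating curve of this shape, each successive user’s data moves the needle less than the previous user’s, which is the economic core of the HVU argument.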
This is further amplified by the use of the matrix factorisation approach, often claimed to be the most accurate way of dealing with the high levels of sparsity in a recommender system’s database.[v] This technique allows recommender systems to mathematically extrapolate the behaviour of potential users from existing (but sparse) user data.[vi] Simply put, these algorithms need to rely only on a very small set of initial users to achieve accuracy. Consequently, however, the data belonging to these initial users becomes that much more valuable to the system. What we effectively have, then, is a class of “high-value” users (HVUs) who provide the most crucial data for ensuring accuracy but are on the wrong side of the value chain when it comes to the accuracy of their own feeds. In other words, every single piece of content ever uploaded to a platform using a CFA is first pushed, purely for experimental/data-collection purposes, to a randomly selected group of users, who then generate a solid and accurate feed for everyone else.
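A hedged, minimal sketch of matrix factorisation over a sparse ratings matrix follows. The data, dimensions and hyperparameters are invented; real systems use far larger matrices and more sophisticated solvers, but the principle is the same: learn low-rank user and item factors from the few observed cells, then extrapolate to every unobserved cell.

```python
import random

random.seed(0)

# Sparse observations: (user, item) -> rating. Toy data for illustration.
observed = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0,
            (1, 2): 2.0, (2, 1): 1.0, (2, 2): 5.0}
n_users, n_items, k = 3, 3, 2          # k = number of latent factors

# Randomly initialised latent factor matrices P (users) and Q (items).
P = [[random.uniform(0, 1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.uniform(0, 1) for _ in range(k)] for _ in range(n_items)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

lr, reg = 0.01, 0.02                   # learning rate, L2 regularisation
for _ in range(2000):                  # stochastic gradient descent
    for (u, i), r in observed.items():
        err = r - dot(P[u], Q[i])
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (err * qi - reg * pu)
            Q[i][f] += lr * (err * pu - reg * qi)

# After training, dot(P[u], Q[i]) approximates observed ratings and
# yields a prediction for every unobserved (u, i) cell.
mse = sum((r - dot(P[u], Q[i])) ** 2
          for (u, i), r in observed.items()) / len(observed)
print(round(mse, 3))
```

Because only six observed cells were needed to fit the factors, every unobserved cell now gets a prediction “for free”, which is exactly why the initial users’ data is disproportionately valuable.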
Understandably, then, those advocating for major corporations using CFAs remain tight-lipped about “testing” content on users while assuring users of the prediction accuracy of their recommender systems.[vii] No one seems to acknowledge that these systems provide varied levels of accuracy depending on whether the user is one of the platform’s most voracious. Users who spend a significant amount of time on the platform in search of new content (e.g., the user who compulsively refreshes his Facebook news feed) are likely to get the most inaccurate results (which, one might say, is mostly the result of their own actions). However, it is their reactions that are directly utilised to bring in other users (who, it may stand to reason, are not as voracious, and could be on the fence about the whole feature/service). This silence is unsurprising, because if it became clear that we, the users, create value for each other at different points of time and for different pieces of content, then those users who inevitably end up being HVUs for a significant amount of content could start expecting a slice of the (metaphorical) revenue pie in a manner and quantum separate from other users. Identifying such users will, no doubt, take some data analysis.
For example, if one were to map just the first 1,000 or so users to whom the most popular/trending video of the day was initially pushed, it should be possible, over a year’s analysis, to isolate a small group of a few hundred users who were the HVUs of multiple trending videos through the year (the “most valuable user class”). If this number is anywhere near as significant as suspected, it could usher in a completely new era in the way people structure contracts with online service providers using recommender systems.
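The analysis proposed above can be sketched in a few lines. The cohort lists, user ids and threshold below are toy assumptions standing in for the “first 1,000 users per trending video over a year” data that a real analysis would use:

```python
from collections import Counter

# Hypothetical data: video_id -> ids of the first users it was pushed to
# (a real analysis would use the first ~1,000 users per trending video).
first_cohorts = {
    "vid1": ["u1", "u2", "u3"],
    "vid2": ["u2", "u3", "u4"],
    "vid3": ["u2", "u5", "u3"],
}

# Count how often each user appears in an early cohort across the year.
counts = Counter(u for cohort in first_cohorts.values() for u in cohort)

THRESHOLD = 3  # e.g. an HVU for 3 or more trending videos (assumed cutoff)
hvu_class = {u for u, c in counts.items() if c >= THRESHOLD}
print(sorted(hvu_class))  # → ['u2', 'u3']
```

The same counting logic scales to millions of cohort records; the only genuinely open questions are the cohort size and the threshold, which, as the text notes, are not the central focus here.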
At the same time, companies often do recognise that there exist such problems with collaborative filtering. The generic solution that they choose to apply is a method called “combined filtering.” This is a form of dual filtering that has aspects from both a collaborative filter and a content-based filter. Using this filtering process, users are divided into various content categories and the relevant content is pushed to the relevant categories. For example, if, from data a user has provided by using the application, it can be ascertained that their main interests lie in the area of sports and music, they will be placed into those categories, along with other users who have similar interests. While this process, to a certain extent, solves the utter arbitrariness of applying only the collaborative filter, it fails to address the central issue, i.e. the HVUs (who are usually also the first users) are still left worse off. This is because, within these content categories, the heavy users still end up being the guinea pigs for the application and the content it chooses to run, and often end up receiving a worse stream of content.
Consider the following example. Assume that there are 200 users within Facebook’s “politics” content category. The heavy users are likely to be pushed new content first, purely because they access the application and its content far more often and are available at more times to have content sent to them. The heavy users therefore end up being the first users of the content and are left worse off: as first users, their content feed is considerably less accurate than that of the user who receives the content last (for the purpose of this example, the 200th user in the category), whose feed is far more streamlined. The first user (who is also a heavy user), by contrast, continues to receive content that they may or may not like. Yet Facebook itself benefits more from the first user’s data than from the 200th user’s, and the benefits the users receive do not reflect this gap.
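The category-bucketing step of “combined filtering” described above can be sketched as follows. The users, interest sets and category names are invented for illustration; the point is only that content-based bucketing narrows the audience, while collaborative ordering within a bucket still reaches the heaviest users first:

```python
# Illustrative sketch of the content-based half of "combined filtering":
# users are bucketed by declared or inferred interests, and collaborative
# recommendations are then computed only within each bucket.
users = {
    "a": {"sports", "music"},
    "b": {"politics"},
    "c": {"music"},
    "d": {"politics", "sports"},
}

def buckets_for(categories):
    """Map each content category to the users placed in it."""
    return {cat: sorted(u for u, interests in users.items() if cat in interests)
            for cat in categories}

buckets = buckets_for({"sports", "music", "politics"})

# Content tagged "politics" is pushed only to users b and d; within that
# bucket, the heaviest users still receive new (untested) content first.
print(buckets["politics"])  # → ['b', 'd']
```

This shows why combined filtering narrows but does not remove the HVU problem: the experiment simply runs inside a smaller, interest-matched population.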
That the law continues to treat all users alike leads to a range of concerns. Given that accuracy is built up over time, the HVUs of a service are left at a disadvantage: there are no previous users whose data could train the algorithm that provides their recommendations. This means that the HVU, who signed up in the same manner and agreed to the same T&Cs as other users, is left at a disadvantage even though the HVU, being among the first users, is a loyalist. In the developing field of technology and privacy, pigeonholing users as one homogenous group, without understanding the need for stratification, has led to a legal scenario in which the needs of the different classes of users are not catered to.
This lacuna can be addressed if users are made aware that they are not always getting the accurate recommendations that the service promises them. Having such disclosure norms in place could lead to multiple outcomes. Heavy users of such services, e.g. Facebook or BookMyShow, could demand to be informed when their data becomes part of the “most valuable user class” for generating ratings over particular content pieces over a period of time, such as movies or news items over a six-month or one-year period. They may additionally demand to be suitably compensated from the proceeds, if any, earned by the service provider from the relevant content piece/s. For example, if a video titled “Gangnam Style” were to become YouTube’s most popular video on a particular day, the first few thousand users (as opposed to the millions after) may have a legitimate claim that YouTube identify them (using their accounts) and pass on an appropriate share of the revenue earned from ads on the video for that day to that class of users, especially if it turns out that, over a 365-day period, these users were also the HVUs for, say, 50 or 100 such videos (the exact thresholds are not the central focus of this paper).
Similarly, other countries such as India and the Philippines have laws that ignore the very possibility of user stratification. India has the Information Technology Act, 2000 (as amended in 2008), and the Philippines relies on its Data Privacy Act, 2012. Both these laws aim to protect the fundamental human right to privacy without deliberating over the extent to which this will be possible if the statutes do not correctly identify issues, such as the first user’s disadvantage, that take shape in this day and age.
Canada, however, appears to be slightly ahead of the curve in this matter, owing to several Reports of Findings that the Office of the Privacy Commissioner of Canada (“OPC”) has released, addressing issues such as default privacy settings[ix] and social plug-ins.[x] In addition, the OPC has released findings indicating that information stored by temporary and persistent cookies is considered personal information and is therefore subject to the Personal Information Protection and Electronic Documents Act, 2000 (PIPEDA). Although this is more than what most other countries have done to protect online consumers, there is still no regulation that deals with the specific class of people who subscribe to the same terms of service as others but still receive service of relatively lower quality.
Finally, no discussion of data privacy regulations can be complete without a mention of the European Union (EU). The EU Data Protection Directive was implemented in 1995 and will be superseded by the General Data Protection Regulation, adopted in April 2016. This Regulation, which will be enforceable from 25 May 2018, states in Recital 32: “Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her… Silence, pre-ticked boxes or inactivity should not therefore constitute consent.”[xi]
Although this provision, too, does not specifically address the disadvantaged class of users, it does allow for service providers to present separate dialog boxes that could inform such users of this drawback, since the fact that services interact differently with different users is one that must be explicitly brought to the notice of all users. Without doubt, the EU framework and the legislation of other countries still fall short inasmuch as they do not address the algorithm itself. However, the above interpretation of this provision may be a good place to begin.
Similar issues pervade the T&Cs of various recommender systems. These T&Cs are an important source of legal discussion, since online services are regulated by the contractual obligations the parties (the service provider and the user) owe each other.
However, as far as the T&Cs of these service providers are concerned, the picture they paint is very different from how the service actually behaves. Behind policies such as “We also use this information to offer you tailored content – like giving you more relevant search results,”[xii] there appears to be a case of differential treatment wherein “you” are no longer the central target of a better/more accurate feed. At the very least, such a clause ought to be reworded to: “We also use this information to offer others tailored content, like giving them more relevant search results, just as we use their information to give you better results.”
The lacuna in the legislation, alongside the inaccuracy in the terms of major corporations using recommender systems, is a matter of concern and underscores the urgent need for legislatures to make laws that fill this vacuum, so that the industry ensures full disclosure to each of its users. This, therefore, is an apt emerging issue of tech law. It also means that there is no clear consent obtained from users about using their data to benefit others. Unlike beta testing, where users who download the beta version of an app are fully aware that their reactions are being monitored and will be used to improve the recommender system, CFA-based systems work dynamically: a new set of predictive data is generated for each new piece of content. Thus, there is no one-time collection of data; data about how users react is collected and collated continuously.
My heartfelt thanks to Mr. Manish Singh Bisht (Head of Technology, Inshorts), Ms. Shreya Tewari (4th-year student at RGNUL, Patiala) and Mr. Sudarshan Srikanth (2nd-year student at Jindal Global Law School, Sonipat) for their assistance in helping me understand the technology at work and with legal research into the multiple jurisdictions and documents examined in this paper.
[ii] See https://www.xcede.co.uk/blog/data-science/the-rise-of-the-recommender-system, accessed October 5, 2017.
[iii] Michael D. Ekstrand, John T. Riedl and Joseph A. Konstan, “Collaborative Filtering Recommender Systems,” Foundations and Trends® in Human–Computer Interaction 4, no. 2 (2011): 81–173, http://dx.doi.org/10.1561/1100000009, accessed October 5, 2017.
[v] Dheeraj Kumar Bokde, Sheetal Girase, Debajyoti Mukhopadhyay, “Role of Matrix Factorization Model in Collaborative Filtering Algorithm: A Survey,” International Journal of Advance Foundation and Research in Computer (IJAFRC) 1, no. 6 (May 2014).
[vi] Meng Xiaofeng and Ci Xiang, School of Information, Renmin University of China, “Big Data Management: Concepts, Techniques and Challenges,” Journal of Computer Research and Development 01, 2013.
[vii] https://code.facebook.com/posts/861999383875667/recommending-items-to-more-than-a-billion-people/, accessed October 5, 2017.
[viii] Warren and Brandeis, “The Right to Privacy,” HARV. L. REV. 4, no. 193 (1890).
[ix] PIPEDA, Report of Findings, #2012-001.
[x] PIPEDA, Report of Findings, #2011-006.
[xi] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, “The Protection of Natural Persons with Regard to The Processing of Personal Data and On the Free Movement of Such Data,” repealing Directive 95/46/EC (General Data Protection Regulation) Official Journal L. 119, no. 1 (2016).
By Dibyojyoti Mainak on 24 October, 2017
Abstract This paper examines how high-value users (HVUs), who are often the first to test out new content, such as a new video, on a recommender system platform provide valuable data for these algorithms and ensure that all subsequent users have a great experience. The paper also explores how the current legal setup, including “terms […]
Unable to pivot with the unflappable grace of the seasoned and canny Gupta, the British and American media scrambled clumsily for explanations that would account for Brexit and Hillary Clinton’s defeat. Experts on print and television in both countries have since articulated a consensus that we live in an age in which ‘truth’ does not matter to the average media consumer, and by extrapolation, to the average citizen, a condition captured by the phrase ‘post-truth.’ ‘Post-truth’ is defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” The phrase has a punchy, pithy ring to it and seems to capture something essential about the global zeitgeist. Yet the phenomenon it seeks to describe is a little more complex and messy than what is suggested by the crisp, succinct definition. And the post-liberalisation Indian mediascape has something important to tell us about it, even as the post-truth experience of media in other societies predicts trends that are likely to be seen in India.
It is a truism—or a ‘post-truism’ if you will—that the internet has played a central role in the creation of the post-truth world. The dogged insistence of President Donald Trump’s supporters on treating information encountered online as gospel truth, the role of fake news engendered for and circulated on the internet, especially via Facebook, in the run-up to the US elections, and the use of Twitter to whip up and legitimise dubious stories—all are testimony to the political power of the internet. Indeed, the epidemic of fake news has become a global malaise, seen in both developed and developing countries. India, in fact, can lay claim to being a pioneer in this realm. For about two decades, the Hindu Right, broadly defined, has used the internet as an immensely effective tool for reshaping the terrain of Indian politics through a variety of strategies, one of which has been the propagation of ‘facts’ whose relationship to the truth is sketchy at best and non-existent at worst.
In the pre-social media era, the BJP was a first mover in using the internet to cultivate overseas Indian populations who had long been staunch supporters of Hindu nationalist ideology and projects, such as the proposed construction of the Ram Mandir on the site of the Babri Masjid.
Beyond the BJP, a number of diasporic and global Hindu organisations, as well as individual adherents of Hindu nationalist ideology, used the internet—through websites, discussion groups like that of the South Asian Journalists Association, and the comments sections of South Asian online spaces like Chowk—to reinforce the core claim of Hindutva that India was essentially a Hindu civilisation, country, and state, and that Muslims and Christians were outsiders in the Hindu nation. Labelling themselves ‘intellectual Kshatriyas’ and including academics, workers in the technology industry in India and the US, professionals, and members of organisations like Hindu Student Councils in US universities, these Hindu Rightwingers were relentless in their critique of secularism and the Nehru-Gandhi family, and equally enthusiastic in propagating tales of the cruelty of Muslim rulers and details of an unacknowledged Hindu genocide. They also took it upon themselves to promote previously unheralded ‘Hindu achievements,’ such as the claim that the Taj Mahal was actually a Hindu structure known as Tejo Mahalaya, or to extol the virtues of cow urine as an elixir with miraculous medicinal properties.
In the age of social media, Twitter has become a battleground—disproportionately important relative to its limited reach across the Indian population as a whole—for similar battles, with WhatsApp complementing it as the source of ‘truths’ that the mainstream media supposedly doesn’t want the people to know. While a range of political parties and organisations have a presence on social media and use these tools, the Hindu Right and the BJP are the loudest and best organised of all. A recent book argues, in fact, that the BJP has an army of paid social media warriors and unpaid volunteers who tweet on specific topics and against specific targets, such as the actor Aamir Khan, who was attacked for criticising the condition of minorities under the current political dispensation.
The effectiveness of the internet in shaping our post-truth world rests on factors particular to the online media form as well as on the political economy of a radically changed media landscape. In part, it is a function of a crisis of both print and televisual journalism, long in the making, with its roots in the blurring of the business and editorial functions of newspapers and the consequent erosion of editorial autonomy, the push toward a greater emphasis on entertainment, progressively smaller budgets for investigative journalism, and the shuttering of overseas bureaus. The Huffington Post model, in which people are rewarded for their intellectual and digital labour only with ‘visibility’, has further led to a devaluation of journalistic capital. In India, this has resulted in the perception that most mainstream media is ‘paid media,’ marked by corporate-editorial arrangements such as ‘private treaties’ for the promotion of corporations in which the holding media company owns a stake. Interestingly, the heavy-handed Leftist critique of this state of affairs, which caricatures all media as ‘corporate-owned’ and serving the interests of imperialist capitalism, may have exacerbated the dwindling prestige of journalism as an essential component of civic life. The telos of this state of affairs is the idea of the journalist as ‘presstitute,’ a term widely used by the Hindu Right to dismiss media organisations it perceives as unjustly opposed to the BJP, Narendra Modi, and Hindu nationalist ideology.
Equally—if not more—importantly, it points to what I would argue is a vastly underexamined aspect of the internet as a form of media: its affective pull and force. Indeed, what the internet promises and delivers, more than ‘post-truth,’ is a different kind of truth, one that is deeply and viscerally felt to be true and one that has to be performed. This affective force is rooted in its immediacy and in the illusion it grants users of autonomy and agency, even if the conditions under which such autonomy is exercised and constrained are invisible.
In my study of online responses to the 26/11 terrorist attacks in Mumbai, I discovered that the same kind of affective attachment functioned as a basis of the authority on which people on social media made claims of empathy, belonging to the city, or the pain of victims. The moral power of affect matched if not surpassed the ethical authority of conventional notions of journalistic credibility.
Both Trump, in the US, and Modi, in India, as well as the global Right (much more than the Left) seem to have intuitively grasped the potential of these aspects of the media ecosystem. The paradox is that both of them use media in a way that purports to be unmediated, pretending to speak directly and honestly to the people, beyond the interventions of the media, unsullied by editorial intervention, free of corporate and monetary influence, and above politics itself. This is why Modi’s “Mann Ki Baat” radio addresses and Trump’s spontaneous, unpredictable tweets seem to carry a power of truth beyond the specific substantive claims they offer. It is likely that these forms of communication will become the new normal in media and politics or in the mediated politics that suffuses all aspects of contemporary existence. The challenge for media, in India or elsewhere, will be to counter it with a narrative that has as much of an affective force, without having recourse to the luxury of non-truth or post-truth.
 Mukul Kesavan, “Calling Their Bluff,” The Telegraph, November 9, 2015, accessed November 11, 2016, https://www.telegraphindia.com/1151109/jsp/opinion/story_52118.jsp.
 Amy B. Wang, “‘Post-truth’ named 2016 word of the year by Oxford Dictionaries,” The Washington Post, November 16, 2016, accessed December 5, 2016, https://www.washingtonpost.com/news/the-fix/wp/2016/11/16/post-truth-named-2016-word-of-the-year-by-oxford-dictionaries/?utm_term=.cb5c6f259876.
 Paul Mozur and Mark Scott, “Fake News in U.S. Election? Elsewhere, That’s Nothing New,” New York Times, November 17, 2016, accessed December 10, 2016, http://www.nytimes.com/2016/11/18/technology/fake-news-on-facebook-in-foreign-elections-thats-not-new.html.
 Swati Chaturvedi, I am a Troll: Inside the Secret World of the BJP’s Digital Army, Juggernaut, 2016.
 Rohit Chopra, “The 26/11 Network: Archive: Public Memory, History, and the Global in an Age of Terror,” International Journal of Communication 9 (2015), 1140–1162, http://ijoc.org/index.php/ijoc/article/viewFile/2263/1360.
On 28 April, 2017
So 2016 has come and gone. Among the many casualties it claimed, perhaps the most conspicuous if not the most surprising, was the credibility of the global mainstream or credentialised media. In the United States (US), the election as president of Donald Trump, who had been given a less than 10 percent chance of victory […]
Yes. To the best of this author’s knowledge, the Aadhaar database has not been defined as “critical information infrastructure” by the Indian government. The National Critical Information Infrastructure Protection Centre (NCIIPC), India’s nodal agency for this purpose, has sought to identify CII, but so far it has focused on flagging certain sectors – banking, health, energy – as “critical”. The UID programme, by contrast, is a cross-sectoral effort to authenticate the credentials of Indian users or consumers. At some point, the NCIIPC will have to seriously weigh bringing Aadhaar into its fold, but no publicly available information suggests any such development for now.
Identifying a database or sector as “critical infrastructure” is important because it is internationally accepted that critical infrastructure is not to be attacked during peacetime or armed conflict. The 2015 report of the UN Group of Governmental Experts (GGE) states:
“A State should not conduct or knowingly support ICT activity contrary to its obligations under international law that intentionally damages critical infrastructure or otherwise impairs the use and operation of critical infrastructure to provide services to the public” (emphasis added)
The GGE’s recommendations, endorsed by the UN General Assembly, are not binding on UN member states. They nevertheless represent the views of the most powerful “cyber” powers, including Russia and China. It is notable that the US’s election machinery, which Russia is formally alleged to have hacked during the 2016 US presidential campaign, was not classified by the Obama administration as critical infrastructure at the time. Attacks on critical infrastructure have not been uncommon – a US banking network, a Saudi state-run oil company, an Iranian nuclear installation, a Ukrainian power plant, to name a few recent targets – given the difficulty of attributing cyber attacks to state agencies. That said, the international community is steadily moving towards declaring the targeting of critical infrastructure a grave violation of international law. If the Indian government sees Aadhaar as a gateway to its services or entitlement schemes, it should move immediately to designate the UID database as critical infrastructure and set up a dedicated Computer Emergency Response Team to monitor attacks or intrusions on the database.
Why would Aadhaar be attacked during an armed conflict?
Targeting the Aadhaar database would serve two purposes. First, attacking a highly centralised national database – thereby limiting the access of Indian citizens to essential services – forces the Indian government to reconsider its military options against an adversary. This could be done through a DDoS attack on Aadhaar servers, preventing legitimate devices or applications from authenticating transactions. Second, Aadhaar data offers valuable intelligence, which can be harvested by penetrating Aadhaar-enabled applications. For instance, the Bharat Interface for Money (BHIM) app merely requires entering the 12-digit Aadhaar number to transfer money from one account to another. The two-factor authentication in BHIM would perhaps prevent fraudulent transfers of money, but hacking the Aadhaar database would allow an adversary to map the flow of funds in an area – thanks to BHIM – as well as its busiest banks. Based on such intelligence, it would be possible to selectively attack financial networks in an Indian town (say, along the border).
Similarly, if the government intends to link tax returns to Aadhaar numbers, sensitive financial information of individuals and companies will be exposed through breaches of the UID database. A “man in the middle” attack by an actor posing as the Aadhaar authenticator could trick the e-filing portal into divulging information. Doomsday scenarios around Aadhaar revolve around identity theft or the loss of huge sums of money, but exploiting the database’s information without conducting disruptive activities is far more valuable to an adversary. Aadhaar, by linking platforms together, makes such mapping and intelligence-gathering exercises easier.
How would an adversary attack Aadhaar databases?
An Aadhaar ecosystem requires an infrastructure layer, a data layer and an application layer. Aadhaar enrolment data, sandwiched between the base infrastructure and the end-user application, is strongly encrypted, and therefore secure in transit. The infrastructure, however, could be owned by an authenticating user agency (like the NPCI), a sub-authenticating user agency (like ICICI Bank) or a “terminal device” (a Xiaomi or Micromax mobile phone).
Similarly, the application layer would be managed by non-UIDAI entities (PayTM, Jio, etc.). While Aadhaar regulations require all contracting parties to “put appropriate network security in place to ensure their systems are protected from attack”, it is impossible to ensure systems-wide compliance. India’s digital supply chains are based abroad, effectively resulting in a situation where the security standards of Smartphone X differ widely from those of Smartphone Y. (It is worth noting that four of the top five smartphone models by market share in India are Chinese.) If an adversary assumes control of a mobile phone, the additional layer of authentication provided by a one-time password to effect Aadhaar-based transactions would be rendered useless. There is also no national encryption policy to regulate data security at the application layer. These applications rely on end-to-end protocols that encrypt financial data but not the user’s information (such as the name, telephone number, number of successful/failed login attempts, details of purchases, etc.). The more these applications link together Aadhaar numbers and (unencrypted) personal information, the easier it becomes for an adversary to map the behaviour of Indian users. Based on the profile of the user/consumer, this information can be used for counter-intelligence, extortion or blackmail.
The Aadhaar database, when matched with a database of personal information, becomes a goldmine for foreign actors to exploit and disrupt India’s digital networks. If operators of nuclear power plants require the Aadhaar numbers of employees to authenticate their entry into the complex, a breach of the UID database will render them vulnerable by exposing their daily activities to an adversary. If the “Bank of X” is known to be sustaining the financial lifeblood of a disputed border town through Aadhaar Enabled Payments, hostile actors may be tempted to shut down its servers located elsewhere. In the future, Internet of Things (IoT) ecosystems will likely be connected to Aadhaar databases – for instance, to allow traffic monitoring systems to directly deduct a fine from the motorist’s bank account, her driving license/plate could be linked to an Aadhaar number, which in turn connects to a bank account. The security of IoT systems leaves much to be desired, and could potentially compromise Aadhaar databases as well.
To counter these strategic threats, India’s policymakers must urgently consider:
This commentary originally appeared in The Wire.
The importance of H1B visas in relations between the United States and India is due essentially to a quirk of history. Just as the United States ratcheted up sanctions on India after its second round of nuclear tests in 1998, Silicon Valley saw its first tech bubble. Sanctions effectively killed any chance of government-to-government dialogue on technology cooperation, but Indian and US companies were already engaged in moving their talent across borders to expand business operations. Bereft of any strategic vision, the H1B programme became an instrument to create a cadre of mid-level IT professionals. As research by Nikhila Natarajan of ORF indicates, the “highly skilled” workers these visa regimes were meant to attract were usually paid a median salary of $70,000 by Indian companies, comparable to that of an elementary school teacher in the US — making the possibility of employment in the US on an H1B visa very appealing.
As a result, employees of Indian IT corporations today make up the largest proportion of recipients of H1B visas. The H1B programme — and the L1 programme, which allows companies to transfer employees with “specialised knowledge” to the US — have been successful in creating bridges between Indian and US businesses. Yet their contribution to collaboration in research and development and strategic engagement is negligible, and has caused India to fall behind in technology innovation. On both the Indian and American sides of the equation, successful bilateral technology collaboration must look beyond the H1B programme in order to tie collaboration to investment in Indian education and innovation.
This was made clear to me on the sidelines of a recent conference on India’s digital economy, where the Research & Development head of one of the world’s leading data analytics companies explained to me that his company’s labs in Moscow — which have a couple hundred employees — offered more value to the company than the thousands of Indian software engineers it employs in Bangalore alone. Puzzled, I asked why. The R&D chief bluntly said advancements in algorithmic efficiency and innovation were made in locations like Moscow or the firm’s offices in Tel Aviv, while the India offices mostly serviced or updated finished products. With Indian expertise being relatively affordable, it makes sense to invest in “high technology” development elsewhere and hire a big team in Bangalore or Gurgaon to manage operations. In the absence of rigorous Master’s or PhD-level training programmes in the country in computer science or cyber security, he said, companies have no incentive to move frontier research to India.
My acquaintance’s dilemma is shared not only by other foreign businesses, but also by the Indian negotiators who make the case for technology cooperation between India and the United States, Israel, or Russia. One diplomat instrumental in the setting up of the India-US High Technology Cooperation Group (HTCG) referred to this as the problem of limited “absorptive capacity”. Foreign businesses may be keen to set up R&D bases in India, but struggle to find Indian engineers and scientists who can innovate for the world. On the other hand, Indian businesses — long content to tackle personnel-level issues like H1B visas — have underinvested in local talent that can convince companies abroad to confidently ink partnership agreements. As a result, those platforms nurtured by the Indian government to incubate technology sharing have all but stalled.
Take the case of the HTCG and the Defence Technology and Trade Initiative (DTTI), respectively the sister India-US initiatives to strengthen civilian and strategic technology collaboration between public and private players. Both emerged from high-level political engagement in the early 2000s between the Bush administration and the Manmohan Singh government in New Delhi. Negotiations that led to a bilateral civil nuclear agreement were followed by a waiver from the Nuclear Suppliers Group (NSG) for India — which is not a signatory to the Nuclear Non-Proliferation Treaty (NPT) — to engage in nuclear commerce. Now that one “technology denial regime” (as many Indian analysts referred to the NSG) had fallen, it was assumed other export controls would follow suit. Indeed, in 2010, the Obama administration formally and publicly committed to supporting India’s membership in the Wassenaar Arrangement. Given the political context, the HTCG and DTTI were conceived as platforms to loosen export controls to India, and crucially, nudge American companies to work with their Indian counterparts in co-creating new technologies.
Yet both these initiatives have made only modest progress since their inception. At the time of writing, nine meetings of the HTCG have been held, with the group most recently meeting in 2014. In no small part due to its efforts, Indian industries today have a better understanding of US licensing controls, and many export restrictions too have been lifted. However, the group has not been able to make any significant headway on promoting R&D collaboration in cyber security. The DTTI, established in 2012, has only reached agreement on two projects, neither of which can be described as game-changing for overall defence ties.
Three factors explain this underperformance. First, it was wrongly assumed that nuclear technology cooperation would spill over into other areas. At the time of signing the India-US civil nuclear agreement, India had a fully self-sufficient and highly advanced nuclear power programme. This is not so for digital or “cyber” technologies. Political goodwill cannot transform the nature of high technology cooperation without investments in tertiary education in India and a willingness by US companies to create intellectual property with their Indian partners. Investments in education should come from the Indian government, but US businesses can play a crucial role in building the capacity of instructors, and strengthening R&D facilities available in Indian campuses.
Second, the political impact of the L1 and H1B visa programmes has been underestimated. Indian IT corporations are today the biggest H1B visa employers, and have held bilateral collaboration in this field captive to their business concerns. Looking to sell their finished products in India, US companies too have been content with recruiting Indian talent under the immigrant visa regime. Neither US nor Indian businesses currently have an incentive to look beyond the regime — but the recent decision to suspend expedited H1B processing may change this.
And third, India and the United States have very different cybersecurity needs. Bilateral technology cooperation, thanks to the way in which the H1B visa regime has been utilised, has so far focused on creating Indian supply chains to service US products, while American investment in one of the world’s fastest growing digital economies has sputtered. The Indian digital ecosystem is characterised by pervasive insecurities, due to a proliferation of cheap handheld devices, poorly tested applications, and the limited “cyber hygiene” of users. High-end products developed in Silicon Valley do not fulfil Indians’ cybersecurity needs. The H1B visa system is (correctly) agnostic about the location of skilled human resources, but also ignores the disparities in the digital economies that it brings together.
President Trump’s policies on immigration and H1B visa reform present an opportunity for both countries to begin a serious conversation on high technology cooperation. This dialogue should address the following questions:
Last week’s developments will exert pressure on New Delhi to bring the visa issue to the top of the agenda when President Trump and Prime Minister Narendra Modi meet this summer. Fortunately, the Indian government appears to be resisting efforts to make this the sole focus of high-level engagement. For its part, the Trump administration, which has legitimate concerns over the misuse of visa regimes like H1B, should explore how those concerns can be channeled to strengthen technology cooperation in the HTCG.
This commentary originally appeared in Lawfare.
20 March 2017
On 16 February, I analysed the Reserve Bank of India’s (RBI) data and showed that the demonetisation exercise initiated by Prime Minister Narendra Modi on 8 November 2016 led to a jump in several measures of payments digitisation as compared to the pre-8 November trend.
One key measure of the ongoing digitisation of the economy is the value of Point of Sale (debit and credit card) transactions, PoS for short. Data released by the RBI in February for only four unnamed banks showed that, relative to the previous trend, this measure of digitisation went up. The RBI has now furnished revised data for November and December 2016 and January 2017 covering PoS for all reporting banks, not just the four. My previous analysis of what happened after 8 November of necessity relied on the partial data furnished by the RBI. Now that the data has been revised, I can redo the same analysis, and what it reveals is startling.
Contrary to the ongoing media narrative that digital payments are slipping, the reality is quite otherwise.
The media continue to indulge in misleading two or three data point analyses, which altogether neglect underlying trends.
Using the statistical model I developed, it’s possible to calculate the extent to which the now-increased level of digital transactions, with the full dataset, relates to the underlying trends. Put simply, this is a parsimonious single-variable model which postulates that any particular component of digital transactions, such as the value of PoS transactions, is a linear function of a constant term and the value of M3, a broad measure of the money stock, where all variables are rendered in natural logarithms.
These models are characterised by extremely sharp fits, with goodness of fit well over 90 percent in all cases, often as high as 96 or 98 percent.
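The trend-fitting exercise described above can be sketched in a few lines. The model form (a log-log regression of a payment component on M3) follows the description in the text, but all the numbers below are invented for illustration and are not RBI data:

```python
import numpy as np

# Hypothetical monthly series (illustrative values only, not RBI data):
# M3 money stock and PoS transaction value, in arbitrary units.
m3 = np.array([118.0, 119.5, 121.2, 122.8, 124.1, 125.9])
pos = np.array([380.0, 392.0, 405.0, 414.0, 426.0, 440.0])

# Fit log(PoS) = a + b*log(M3) by ordinary least squares.
x, y = np.log(m3), np.log(pos)
b, a = np.polyfit(x, y, 1)  # slope, intercept

# Goodness of fit (R^2), described in the text as "well over 90 percent".
fitted = a + b * x
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Percent deviation above trend for a post-break month: suppose M3 = 127.0
# and observed PoS = 760.0 (again, invented numbers).
predicted = np.exp(a + b * np.log(127.0))
pct_above_trend = 100 * (760.0 / predicted - 1)
```

The deviations quoted below (84 percent, 72 percent, and so on) are of this kind: observed values compared against what the pre-demonetisation trend would have predicted, not against the previous month.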
The first part in this series, based on RBI’s data, showed an increase relative to trend in both PoS and IMPS by value in December 2016, and again in January 2017. In other words, the absolute drop between December and January masked the reality that PoS was still above the trend the model predicted before the structural break caused by demonetisation.
The new data paint an even more dramatic picture. I have augmented the research in the original piece, which considered only PoS and Immediate Payment Systems (IMPS), and added two other crucial elements of modern digital payments, which are mobile banking transactions and National Electronics Funds Transfer (NEFT) transactions. The latter is essentially an electronic transfer directly from one bank account to another, which is seamless across the Indian payment system. Both of these new components are modelled with the same parsimonious linear regression model as described above.
The data for PoS show that December 2016 saw a whopping 84 percent increase in PoS as compared to the trend and January 2017 saw a big 72 percent increase above trend. Again, while it’s true there’s a drop in absolute level, such two data point analyses miss the fact that PoS is still way above the pre-demonetisation trend.
Likewise, for IMPS, there was a 13 percent increase in December and a 12 percent increase in January, again relative to trend.
My research shows that, for mobile banking, there was a staggering 88 percent increase in December which tapered down to a still very large 70 percent increase in January.
Likewise, for NEFT, the figures show an increase of 27 percent and 23 percent in December and January, respectively.
Clearly PoS and mobile banking showed the biggest upturn in percentage terms, relative to trend, while IMPS and NEFT showed smaller upticks. This matches common sense, as PoS and mobile banking transactions would have been natural fallbacks to fill the breach given the cash crunch.
Unlike some media accounts, I do not make a similar calculation for February 2017, for which there is PoS data for only four banks rather than all reporting banks, and no complete data for the other components. Only when we have the full data can we say anything meaningful about what happened last month.
One facet of the government’s digitisation drive is the Bharat Interface for Money (BHIM), an official payments app launched by Prime Minister Modi on 30 December 2016. Official data from the National Payments Corporation of India (NPCI) reports that there were almost 14 million downloads as of 8 February 2017, and news reports suggest the figure crossed 17 million in late February. NPCI’s data also shows a steady increase in the volume and value of BHIM transactions. This is a promising start.
Yet, despite all of the hype and the government’s stated commitments to push digitisation, there are some reasons for concern. For one thing, admittedly anecdotal evidence suggests that after an initial adoption of digital payment methods, some smaller merchants and traders are now reverting to cash, since the liquidity crunch has largely eased. By the same token, some consumers are now reverting to cash, which is back in ATMs. The reasons for this are not purely that individuals and merchants would like to keep transactions off the books and out of the tax net, although this is almost surely a factor. Other more mundane reasons include the high frequency of failed transactions, network issues and high transaction costs. Some merchants, for example, complain that they can’t get money out of some payment apps into their bank accounts, and such apps also typically limit the value of transactions in a given month unless customers go through cumbersome and time-consuming KYC procedures. This is one advantage of BHIM, which, as an official product of the NPCI, uses the Unified Payments Interface (UPI). In other words, it’s linked directly to a user’s bank account through the RBI’s regulated payments system and does not rely on third-party commercial payment mechanisms.
All these new data I’ve analysed still leave open the question of whether the big jump above trend represents a short-term blip in the aftermath of demonetisation, or whether we’re settling into a new equilibrium with permanently higher shares of digitisation in total transactions.
Analytically, a question we will not be able to answer until we have more data is what share of the current jump above trend represents a permanent shift and what share is a transitory jump only in the wake of demonetisation.
Common sense suggests that part of the uptick in digital payments is likely to represent a long term structural shift. Otherwise, when the cash crunch largely disappeared in January 2017, one ought to have expected a quick return to the pre-Nov. 8 trends. That has not yet happened, holding out the possibility that a basic behavioural change might have occurred. Still, the fact that the data nevertheless show a slow reversion to trend means we will not be able to say anything definitively for several more months if not longer.
The RBI, Ministry of Finance and Prime Minister’s Office will need to be vigilant and continue to monitor the data on digital transactions.
Read the first part ► India is adopting digital payments like never before, but cash too seems here to stay
Much of the discussion on AI in popular media has been through the prism of job displacement. Analysts, however, differ widely on the projected impact – a 2016 study by the OECD estimates that 9 percent of jobs will be displaced in the next two years, whereas a 2013 study by Oxford University estimates that job displacement will be 47 percent. The staggering difference illustrates how much the impact of artificial intelligence remains speculative. Responding to the threat of automation on jobs will undoubtedly require revising existing education and skilling paradigms, but at present we also need to consider more fundamental questions about the purposes, values, and accountability of AI machines. Interrogating these first-order concerns will eventually allow for a more systematic and systemic response to the job displacement challenge as well.
First, what purpose do we want to direct AI technologies towards? AI technologies can undoubtedly create tremendous productivity and efficiency gains. AI might also allow us to solve some of the most complex problems of our time. But we need to make political and social choices about the parts of human life in which we want to introduce these technologies, at what cost, and to what end. Technological advancement has resulted in a growth in national incomes and GDP; yet the share of national incomes going to labor has dropped in developing countries. Productivity and efficiency gains are thus not in themselves conclusive indicators of where to deploy AI – rather, we need to consider the distribution of these gains. Nor are productivity gains equally beneficial to all – incumbents with data and computational power will be able to use AI to gain insight and market advantage. Moreover, a bot might be able to make more accurate judgments about worker performance and future employability, but we need a more precise handle on the problem being addressed by such improved accuracy, and to whose benefit. AI might be able to harness the power of big data to address complex social problems. Arguably, however, our inability to address these problems has not been the result of incomplete data – for decades now we have had enough data to make reasonable estimates about the appropriate course of action; it is the lack of political will and entrenched social and cultural behavioral patterns that have posed obstacles to action. The purpose of AI in human life must not be merely assumed as obvious, or subsumed under the banner of innovation, but seen as involving complex social choices that must be steered through political deliberation.
This then leads to a second question about the governance of AI – who should decide where AI is deployed, how should these decisions be made, and on what principles and priorities? Technology companies, particularly those with the capital to invest in AI capacities, are predominantly leading current discussions. Eric Horvitz, managing director of the Microsoft Research Lab, launched the One Hundred Year Study on Artificial Intelligence based out of Stanford University. The Stanford report makes the case for industry self-regulation, arguing that ‘attempts to regulate AI in general would be misguided as there is no clear definition of AI, and the risks and considerations are very different in different domains.’ The White House Office of Science and Technology Policy recently released a report on ‘Preparing for the Future of Artificial Intelligence’, but accorded a minimal role to government as regulator – rather, the question of governance is left to the supposed ideal of innovation: AI will fuel innovation, which fuels economic growth, and this will eventually benefit society as well. The trouble with such innovation-fuelled self-regulation is that development of AI will be concentrated in those areas in which there is a market opportunity, not necessarily those that are the most socially beneficial; technology companies are not required to consider issues of long-term planning and the sharing of social benefits, nor can they be held politically and socially accountable.
Earlier this year, a set of principles for Beneficial AI was articulated at the Asilomar conference – the star speakers and panelists were predominantly from large technology companies like Google, Facebook, and Tesla, alongside a few notable scientists, economists and philosophers. Notably missing from the list of speakers were government representatives, journalists, and the public and their concerns. The principles make all the right sounds, clustering around the ideas of “beneficial intelligence”, “alignment with human values” and “common good”, but they rest on fundamentally tenuous value questions about what constitutes human benefit – a question that demands much wider and more inclusive deliberation, and one that must be led by government for reasons of democratic accountability and representativeness. What is noteworthy about the White House report in this regard is the attempt to craft a public deliberative process – the report followed five public workshops and an Official Request for Information on AI.
The trouble is not only that most of these conversations about the ethics of AI are being led by the technology companies themselves, but also that governments and citizens in the developing world are yet to start such deliberations – they are in some sense the passive recipients of technologies that are being developed in specific geographies but deployed globally. The Stanford report, for example, attempts to define the issues that citizens of a typical North American city will face from computers and robotic systems that mimic human capabilities. Surely these concerns will look very different across much of the globe. The conversation in India has mostly been clustered around issues of jobs and the need for spurring AI-based innovation to accelerate growth and safeguard strategic interests, with almost no public deliberation around broader societal choices.
The concentration of an AI epistemic community in certain geographies and demographics leads to a third key question about how artificially intelligent machines learn and make decisions. As AI becomes involved in high-stakes decision-making, we need to understand the processes by which such decision-making takes place. AI consists of a set of complex algorithms built on data sets. These algorithms will tend to reflect the characteristics of the data they are fed. This means that inaccurate or incomplete data sets can result in biased decision-making. Such data bias can occur in two ways. First, the data set may be flawed or may inaccurately reflect the reality it is supposed to represent. If, for example, a system is trained on photos of people who are predominantly white, it will have a harder time recognizing non-white people. This kind of data bias is what led a Google application to tag black people as gorillas, and Nikon camera software to misread Asian people as blinking. Second, the process being measured through data collection may itself reflect long-standing structural inequality. ProPublica found, for example, that software used to assess the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.
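The kind of disparity ProPublica reported can be made concrete by comparing false positive rates across groups. This is a toy audit with invented records, not the actual recidivism data:

```python
# Each record: (group, flagged_high_risk, reoffended). Invented for illustration.
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("A", False, True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", False, True),  ("B", True,  True),
]

def false_positive_rate(group):
    """Among members of `group` who did NOT reoffend, the share flagged high-risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

fpr_a = false_positive_rate("A")  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate("B")  # 1 of 3 non-reoffenders flagged
```

In this toy data, group A's false positive rate is twice group B's: a model can look reasonable in aggregate while its errors fall much more heavily on one group, which is exactly the pattern the ProPublica audit surfaced.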
What these examples suggest is that AI systems can end up reproducing existing social bias and inequities, contributing towards the further systematic marginalization of certain sections of society. Moreover, these biases can be amplified as they are coded into seemingly technical and neutral systems that penetrate across a diversity of daily social practices. It is of course an epistemic fallacy to assume that we can ever have complete data on any social or political phenomena or peoples. Yet, there is an urgent need to improve the quality and breadth of our data sets, as well as to investigate any structural biases that might exist in these data – how we would do this is hard enough to imagine, let alone implement.
The danger that AI will reflect and even exacerbate existing social inequities leads finally to the question of the agency and accountability of AI systems. Algorithms represent much more than code, as they exercise authority on behalf of organizations across various domains and have real and serious consequences in the analog world. However, the difficult question is whether this authority can be considered a form of agency that can be held accountable and culpable. Recent studies suggest, for example, that algorithmic trading between banks was at least partly responsible for the financial crisis of 2008; the crash of the sterling in 2016 has similarly been linked to a panicky bot-spiral. Recently, both Google’s and Tesla’s self-driving cars have caused crashes – in the Tesla case, a fatal one in which a man died while using the autopilot function. Legal systems across the world are not yet equipped to respond to the issue of culpability in such cases, and the many more that we are yet to imagine. Nor is it clear how AI systems will respond to ethical conundrums like the famous trolley problem, or how human-AI interaction on ethical questions will be influenced by cultural differences across societies or time. The question comes down to the legal liability of AI – whether it should be considered a subject or an object.
The trouble with speaking about accountability also stems from the fact that AI is intended to be a learning machine. It is this capacity to learn that marks the newness of the current technological era, and this capacity of learning that makes it possible to even speak of AI agency. Yet, machine learning is not a hard science; rather, its outcomes are unpredictable and can only be fully known after the fact. Until Google’s app labels a black person as a gorilla, Google may not even know what the machine has learnt – this leads to an incompleteness problem for political and legal systems charged with the governance of AI.
The question of accountability also comes down to one of visibility. Any inherent bias in the data on which an AI machine is programmed is invisible and incomprehensible to most end-users. This inability to review the data reduces the agency and capacity of individuals to resist, even recognize, discriminatory practices that might result from AI. AI technologies thus exercise a form of invisible but pervasive power, which then also obscures the possible points or avenues for resistance. The challenge is to make this power visible and accessible. Companies responsible for these algorithms keep their formulas secret as proprietary information. However, the far-ranging impact of AI technologies necessitates the need for algorithmic transparency, even if it reduces the competitive advantage of companies developing these systems. A profit motive cannot be blindly prioritized, if it comes at the expense of social justice and accountability.
When we talk about AI, we need to talk about jobs – both about the jobs that will be lost and the opportunities that will arise from innovation. But we must also tether these conversations to questions about the purpose, values, accountability and governance of AI. We need to think about the distribution of productivity and efficiency gains and broader questions of social benefit and well-being. Given the various ways in which AI systems exercise power in social contexts, that power needs to be made visible to facilitate conversations about accountability. And responses have to be calibrated through public engagement and democratic deliberation – the ethics and governance questions around AI cannot be left to market forces alone, even in the name of innovation. Finally, there is a need to move beyond the universalizing discourse around technology – technologies will be deployed globally and with global impact, but the nature of that impact will be mediated through local political, legal, cultural, and economic systems. There is an urgent need to expand the AI epistemic community beyond the specific geographies in which it is currently clustered, and provide resources and opportunities for broader and more diverse public engagement.
The title of this piece is borrowed from Haruki Murakami’s What I Talk About When I Talk About Running.
Under Section 4 of the Act, one requires an approval (in the form of certification) from the CBFC in order to exhibit a film. The CBFC undertakes the process of examining the film and: sanctions it for unrestricted public exhibition (U); sanctions it for public exhibition restricted to adults (A); directs cuts or modification in the film; or refuses to certify the film. The CBFC can refuse to certify the film only if it is against the “interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or involves defamation or contempt of court or is likely to incite the commission of any offence”. This requirement for approval applies to the exhibition of films produced in India as well as films imported into the country.
Section 3 of the Act, which provides for the establishment of the CBFC, states that the CBFC is set up for the purpose of certifying films which are intended for “public exhibition”. The term “public exhibition” has also been used in Rule 2 (iii) of the Cinematograph Rules, 1983 which defines an applicant under Section 4 of the Act. This means that the Act applies to all films that are intended for public exhibition. However, “public exhibition” is not defined under the Act. Does the term refer to making a film available for watching only in public places such as cinema halls, or making a film available to the public whether for watching in public or in private?
The Delhi High Court, when dealing with this question in the case of Super Cassettes Industries v. Central Board for Film Certification, held:
“Even if there is no audience gathered to watch a film in a cinema hall but there are individuals or families watching a film in the confines of their homes, such viewers would still do it as members of the public and at the point at which they view the film that would be an “exhibition” of such film.”
In this case, the court held that exhibition of films by sale and distribution of DVDs or VCDs would be subject to the Act. Thus, exhibition of films under the Act means exhibition to the public whether for public or private viewing.
Netflix is an online, subscription-based platform which allows us to watch films privately and on demand. Would Netflix therefore be subject to the Act based on this understanding of ‘exhibition’? In order to answer this question, we need to find out whether the Act applies to the public exhibition of films through any medium, including cable television networks, DVDs and the internet. I have addressed this question later in the article.
The draft Cinematograph Bill, 2013 defines “exhibition” as “audio or visual dissemination of a film or part thereof or making available a film or part thereto through use of a public medium”. The Bill also defines “public medium” as a “medium, forum or place to which members of the general public have access to with or without the payment of a fee or charge”. Thus, platforms like Netflix – which the public can access to watch films with payment of a subscription amount – would be understood as exhibiting films under the draft Bill. The draft Bill was listed for introduction in the Monsoon Session of the Parliament in 2010. To our knowledge, there has not been any progress on the status of the draft Bill. It is interesting to note that an Expert Committee (chaired by Shyam Benegal) was constituted to formulate recommendations for certification of films by the Central Board of Film Certification (CBFC), and submitted its report on the issue in April 2016. However, the report does not make any reference to the draft Bill and unlike the draft Bill, it has not provided any clarity on the interpretation of the term ‘exhibition’.
It is noteworthy that under the Cable Television Network (Regulation) Act, 1995 (hereafter the ‘Cable TV Act’), cable network operators are required to ensure that the films that can be accessed by their users at home must be certified by the CBFC. Rule 6 (n) of the Cable Television Network (Regulation) Rules, 1994 provides that the content provided by cable network operators to their users must comply with the Act. This suggests that the term ‘exhibition’ of films under the Act has been understood under the Cable TV Act to include exhibition of films for private viewing by the public. As mentioned earlier, with this understanding of ‘exhibition’, films exhibited on Netflix would have to undergo the certification process under the Act.
It would not be right to draw this conclusion without comparing Netflix with different media of exhibiting films – such as VCDs and DVDs, movies on demand provided by cable network operators, internet protocol television channels, and online content-sharing platforms such as YouTube – and examining which medium Netflix most closely resembles. It is necessary to do this because different media are regulated by different legislative frameworks.
One can compare Netflix to a cable television network provider (hereafter ‘cable network operator’) as both services allow multiple users to watch films privately at home on payment of a subscription. Cable network operators are regulated under the Cable TV Act and related rules and guidelines. The Cable Network Rules and the Downlinking Guidelines, 2005, formulated under the Cable TV Act, require cable network operators to ensure compliance with the certification requirement under the Act while determining their programme content. Therefore, only those films that have been certified by the CBFC may be made available by cable network operators. Cable television networks are, however, different from Netflix in the way they operate. This is critical because cable television networks are defined under the Cable TV Act in terms of their operation. They use satellite signals to distribute content to multiple subscribers, while Netflix rides on the networks of telecom service providers and internet service providers to provide content to its users. Therefore, cable television regulation cannot be applied to Netflix.
Now, let us compare Netflix with internet protocol television (IPTV). IPTV is a service which uses the internet for delivery of multimedia content to a customer and makes it available on television, cellular, and mobile TV terminals with STB modules or similar devices. Both Netflix and IPTV use the internet to provide services to their users (which is also referred to as streaming) and both are paid services. However, in India, IPTV services can only be provided by registered cable television network operators (because IPTV is treated like a cable television service) and by telecom service providers and internet service providers who operate under a license given by the Department of Telecommunications (because IPTV is a television service that runs on the internet). As a condition under these licenses, it has been made clear that the content restrictions applicable to cable network operators, as discussed in the earlier section (which include compliance with the certification requirement under the Act), would apply to IPTV as well. Although Netflix also provides content over the internet (like IPTV), it is treated as an over the top (OTT) service and does not need a license to provide services in India. Since Netflix does not operate under a license, it does not need to comply with the Act, unlike IPTV.
Netflix and VCDs/DVDs are similar in that both can be accessed at home to watch films privately on payment of money. The difference between the two is that VCDs/DVDs are sold through a physical medium whereas Netflix uses the internet to provide access to films online. Also, the buyer owns a copy of the VCD/DVD containing the film, while a user of Netflix does not own a copy of the film that he/she watches on Netflix. The Act does not clearly provide that it regulates the exhibition of films through VCDs/DVDs. The Delhi High Court has however held, in the case of Super Cassettes Industries v. Central Board for Film Certification, that DVDs come within the purview of the Act. The Court held that under Section 52A (2) (a) of the Copyright Act, one could not make a cinematograph film available to the public (here by sale/distribution of VCDs/DVDs) that requires certification under the Act unless it was accompanied with a copy of the certificate from the CBFC. We know that Netflix makes cinematograph films available to the public using the internet. We also understand that under the Copyright Act, a cinematograph film could be a film available on any medium, including the internet. Therefore, the requirement to display particulars about certification of a cinematograph film as provided under Section 52A(2)(a) of the Copyright Act would also apply to films available on Netflix.
It is also useful to make a comparison between Netflix and paid movies provided by YouTube, an online content (video) sharing platform which allows users to watch films privately using the internet on payment of money. Both Netflix and YouTube are OTT services. They do not need prior approval or a license to operate in India and are based outside India. We know that the Act does not clearly provide that it applies to the exhibition of films irrespective of the medium. It is however clear that under Section 52A (2)(a) of the Copyright Act, films made available to the public, in this case on YouTube, must have received certification from the CBFC under the Act. In my conversation with the YouTube customer support team for paid content, it was not clear whether, in practice, they only exhibited films which were certified by the CBFC. This raises several questions about the difficulty in applying the Act to platforms like YouTube and Netflix.
What kind of films can Indian users watch on Netflix? These could be: films produced and certified in India and made available on Netflix; films produced in India but available exclusively on Netflix; uncertified versions, available on Netflix, of films produced outside India which have been censored by the CBFC and released through traditional channels in India; or films produced outside India that have not been certified or released in India but are available to Indian users on Netflix.
We know that where an applicant is informed that certification of a film by the CBFC is contingent upon removal of certain portions of the film, the applicant is required to “remove such portions from the negative of the film and all copies of the film in the possession of the applicant or the laboratory where the film was processed, the distributor, the exhibitor or any other person is required to be surrendered”. Therefore, after a film (whether produced in India or outside India) has been reviewed by the CBFC, only the version of the film that has been approved by the CBFC may be made available to the public. All other versions in the possession of “any other person” must be surrendered under the Act. Releasing any version other than the censored and certified version of such a film would be considered illegal under the Act. Recently, in a public interest litigation filed before the Punjab & Haryana High Court against the release of the films Mastizaade and Kya Kool Hain Hum 3, the court directed the producers/directors to submit an undertaking that they would not release the excised portion of the feature/film to anyone in any medium, including the internet. It may however prove difficult to regulate the exhibition on Netflix of films that are produced and released outside India and films released only on Netflix.
Technology always precedes regulation. This is why for regulation to be effective, it must be technology-agnostic. What we see in this microscopic analysis is an example of a law that is not technology agnostic and thus fails to keep up with new technologies of exhibition of films. The applicability of the Act certainly depends on how we understand the term ‘exhibition’ of a film. Neither the Act nor the relevant case law provides a clear and effective interpretation of this term. There is no legal provision in the Act that states or implies that it is applicable to exhibition of films through the internet. It is however clear that there are separate regulatory frameworks in India for different media of exhibition of films. These frameworks require these media owners or service providers to ensure that they only exhibit CBFC certified films to the public. There is no distinct regulatory framework that ensures that the Act applies to platforms like Netflix.
It is beyond doubt that with the advent of the ‘Netflix era’, the interplay between media content and ownership, service provision and regulatory choices will undergo significant disruption. In the Indian context, the state has unfortunately chosen to deal with this paradigm shift by creating separate regulatory frameworks in an ad hoc manner instead of trying to realise some sort of convergence. This multiplicity of regulatory frameworks comes at a time when the boundaries between these regimes are increasingly difficult to define. Thus, the need to adopt a technology agnostic approach is one of the most central issues that must be addressed in any new reform.
On a separate note, the regulatory uncertainty that the Act has created allows the use of other tools for content regulation online. For instance, the Information Technology Act, 2000 allows the government and the courts to direct ISPs, on whose networks users access Netflix, to either block or take down films available on Netflix that they consider objectionable. While we know that one could bypass censorship filters and access content that is not available in the country by using free/paid proxy services, this type of content regulation could force players like Netflix to give in to the state’s paternalism and inspire self-censorship in order to carry out their operations smoothly within the country. That is a dangerous precedent to set in the industry as far as freedom on the internet is concerned.
There is an urgent need to initiate discussions about reform in the present framework(s) for film regulation that addresses technology neutrality and does not stifle the freedom of expression of the film industry and the viewers.
(This essay originally appeared in the third volume of Digital Debates: The CyFy Journal)
 Section 4(1), The Cinematograph Act, 1952 states that “Any person desiring to exhibit any film shall in the prescribed manner make an application to the Board for a certificate in respect thereof.”
 See Section 5B, The Cinematograph Act, 1952. This language has been borrowed from Article 19 (2) of the Constitution of India which imposes reasonable restrictions on freedom of expression.
 Rule 2 (iii), the Cinematograph (Certification) Rules, 1983: “applicant” means a person applying for certification of a film for public exhibition under section 4.
 Super Cassettes Industries v. Central Board for Film Certification, W.P.(C) No. 2543 of 2007
 [Draft] Cinematograph Bill, 2013, available at http://www.prsindia.org/uploads/media/draft/Draft%20Cinematograph%20Bill,%202013-.pdf
 Section 2(c), the Cable Television Network (Regulation) Act, 1995 states that “Cable television network is a system of closed transmission paths and associated signal generation, control and distribution equipment, designed to provide cable service for reception by multiple subscribers.”
 Guidelines For Provisioning of Internet Protocol Television (IPTV) Services, 2006 issued by Ministry of Information and Broadcasting
 Unified License Agreement, Chapter VIII, Provision of IPTV Service, Clause 5.1(d) – The provisions of Programme code and Advertisement code as provided in Cable Television Network (Regulation) Act 1995 and Rules there under shall be applicable…. Since the Licensee will be providing this content, the Licensee shall be responsible for ensuring compliance to the codes with respect to such content. In addition to this, such LICENSEEs will also be bound by various Acts, instructions, directions, guidelines issued by the Central Government from time to time to regulate the contents. 5.1(e) If the contents are being sourced from content providers other than Licensee, then it will be the responsibility of Licensee to ensure that their agreements with such content providers contain appropriate clauses to ensure prior compliance with the Programme and Advertisement Codes and other relevant Indian laws, civil and criminal, regarding content.
 Rule 26, Cinematograph (Certification) Rules, 1983
 Raghav Ohri, Bad news for movie buffs! Censored parts of films to stay out of internet, ET Tech (March 03, 2016) http://tech.economictimes.indiatimes.com/news/internet/bad-news-for-movie-buffs-censored-parts-of-films-to-stay-out-of-internet/51233972
Digital technology has had a positive impact on women’s education, skills and, therefore, employment opportunities. In countries where digital access and abilities are more widespread, there is a stronger sense of gender equity. Women who are familiar with the internet also display a strong sense of leadership because they are confident in themselves and their skills. Women want to return to the workforce and are finding new paths to economic achievement; entrepreneurship is a big part of this.
Companies and governments face a disparity between the skills they need to stay competitive and the pool of talent available to them. Because women are underrepresented in the workplace in most countries, they are a significant source of untapped talent. The future looks promising as the youth mature, move into the workplace and grow through the ranks of leadership at work, using their skills to become agents of change for their gender.
According to a report by Accenture, “If governments and businesses can double the pace at which women become digitally fluent, we could reach gender equality in the workplace by 2040 in developed nations and by 2060 in developing nations.”
India adds five million connected users every month. These statistics are testimony to the opportunity in India’s internet story. As of 2016, the number of mobile internet users in India was above 400 million, as per internetlivestats.com. That’s second only to China and ahead of the United States. However, despite these powerful figures, there is a gender gap when it comes to access to the internet. The Internet and Mobile Association of India (IAMAI) report shows that men account for 71 percent of internet users, while women account for just 29 percent. The gap is slightly lower in urban India, with men accounting for 62 percent and women 38 percent. These are the major findings of a report titled “Mobile Internet in India 2015,” released by the IAMAI and the Indian Market Research Bureau International.
In rural India, the ratio is skewed even further in favour of men, who constitute 88 percent of total internet users. However, these disappointing figures also present an opportunity and spell out the possibilities of providing women with new life skills.
While the Make in India initiative has a lion for a mascot, the campaign offers scope to create lionesses out of women. A focus on entrepreneurship over job seeking can make change-makers of Indian women. These entrepreneurial industries can be big and small—operated from home or an office, virtual or on the shop floor.
Ananya Birla runs India’s third largest microfinance outfit called Svatantra. Every other week she is in a village, understanding how her clients—small, unorganised, often self-help groups and women—are utilising their funds. “We call it microfinance 2.0, lending small loans to many people and integrate that with technology,” she explains, on the heels of her return from Amravati in Maharashtra. “When I go down and talk to our clients, they are very happy with these, which is inspiring and refreshing.” Svatantra provides loans to tailors, farmers, housewives etc., depending on their businesses. “Women in these places are very enterprising,” notes Ananya, referring to the female populations in rural India.
There’s Devita Saraf, whose VU Technologies is turning 10 even before she is 35. She sells high-end televisions in India, from Maharashtra to Manipur. “Never underestimate your customer,” she says, citing how India’s appetite for luxury is growing. Saraf’s business has spread to interior India after she took VU online. First, she started selling in big metros but now, her products reach areas such as Sohlapur and Salem. Another female founder, Uma Reddy of Hitech Magnetics in Bangalore, is an electrical engineer running a company that manufactures transformers, feeding India’s heavy industry and defence needs. These are just some examples of women in businesses. In addition to them, there are champions of digital, branding and ideas. There’s Siddhi Karnani of Parvata Foods—primarily responsible for farming and organic produce in Sikkim—producing home-grown spices such as ginger and packaging them for the markets across India. It is evident that the breadth of businesses women are involved in is wide-ranging and impactful.
“Women entrepreneurs have an edge over male entrepreneurs,” says Amitabh Kant, CEO of National Institution for Transforming India (NITI Aayog). He insists that this fact is going to radically change the story of the country’s future and its approach to creating economic value. “They will outperform for several valid reasons. Women leaders in India have a better feel of the household spending patterns. They understand consumer perspective better. They have a way of building trust with customers, shareholders, etc. Also, there is a great level of diversity when women occupy top positions.”
Tech has fundamentally given many the flexibility to work when they want, according to Shikha Sharma, CEO of Axis Bank. It allows people to work from home without taking away from productivity, as it saves costs for some companies and the time and stress involved in travel for employees. “From a workplace perspective, it’s giving women the opportunity to continue to participate in their careers without giving up the other roles as mothers or a daughter or daughter-in-law or whatever.”
The new workplace is far more flexible. Attitudes have changed in the new entrepreneur-led organisations and the rising digital businesses that have a start-up-like culture, allowing freedom of ideas, flexibility in terms of place of work and more. “When we started, there were few role models as women who were able to balance work and home,” says Sharma. “For all of us as women, you don’t want to be a loser in your family role. And therefore, it’s a constant question in your mind by becoming a career woman—are you going to compromise on your family?” So having a mentor to go to, who is able to balance that well, is very important to one’s ability to stay on that path. This raises the question: what is the role of peers, and do they accept women as equals in the workplace? According to Sharma, “Now we have got to the point that people are recognising having balance in your workforce, having men and women you get different perspectives and you could actually have a richer decision coming through.”
India’s diverse cultures manifest themselves in its 22 languages and thousands of dialects. A new wave of interest in the internet is seen in those who have discovered that the internet should not be limited to English. Rajan Anandan, Google’s Asia Head, mentions the surge of local Indian languages as part of his larger emphasis on rise of content in India. “Hindi internet in India has grown at eight times more than English internet,” he said at the Digital Women Awards in November 2015.
Going forward, there will be more vernacular content as the user base diversifies and grows to include larger numbers of rural consumers. The share of vernacular content online is estimated to increase from 45 percent in 2013 to more than 60 percent in 2018, according to a report by the Boston Consulting Group, mirroring broadening consumption patterns in offline media such as print and television. The English language still accounts for 56 percent of the content on the worldwide web, while Indian languages account for less than 0.1 percent. However, although the internet in India is predominantly English, there is high potential for regional language content. According to the report, in the last year alone, Hindi content on the web has grown by about 94 percent, whereas English content has grown at only 19 percent. This is a relevant scenario for promoting the inclusion of women across the country.
A little-known but important fact is that most of the unconnected population are women. Not enough of them own mobile phones. In low- and middle-income countries alone, 1.7 billion females do not own a mobile phone today, says a McKinsey report. For those who do own phones, internet usage is prohibitively low, and consumption of data and content is less intense. This gap spells an opportunity, and reflects the fact that phone penetration is less ubiquitous in rural regions. Successfully targeting women not only unlocks significant growth potential for the mobile industry but also advances women’s digital and financial inclusion. In fact, closing the gender gap in mobile phone ownership and usage could unlock an estimated $170 billion market opportunity for the mobile industry in the period from 2015 to 2020. Women in South Asia are 38 percent less likely to own a mobile phone. Social media statistics also reflect this disproportion. Fewer than 30 percent of women use social media, purportedly because it’s an unsafe place. Ankhi Das, Policy Head of Facebook, brings an important and real perspective to these figures. “It has a lot to do with access to resources. The deeper normative values we need to look at as a society before saying that it’s only because of safety concerns that people aren’t online. If I as a family have a data plan, and I come from a middle income status, which is subject to certain kind of normative values, and if I have both a son and a daughter, I will give the data plan to the son and not the daughter. I think this is a false binary that safety issues are keeping women from coming online.” According to a United Nations (UN) Women survey, “Gender barriers are real. One in five women in India and Egypt believes the internet is not ‘appropriate’ for them.”
The most powerful outcome of internet use is the fact that it decentralises work centres and therefore makes empowerment widespread. India’s growing cities are the hotbed of talent, especially among women. SheThePeople.TV does a monthly workshop with women entrepreneurs who use the internet for their brand or business outreach. In Lucknow, Indore, Jaipur, Pune and many other cities, there is a growing network of women entering the start-up space. Many are turning homes into home-offices, some are catering food for orders made via the internet, several women are selling fashion garments on WhatsApp, and artists and musicians are building pages to extend their reach from Gachibawli to global audiences.
The trends are fascinating. Despite challenging and evolving business cycles, entrepreneurs are reinventing ideas to meet the needs of the current market. The young generation is open to change—to diversify and go with ideas that will work in the new, demanding environment. There are various factors at play in these mini metros—one of them is the surge in smartphone usage. There is increasing penetration and enhanced reach as phones populate these cities, allowing for greater reach of e-commerce. People want more choice and are hungry for access, fuelling demand that is more pointed. Little wonder then that in Jaipur and Lucknow, SheThePeople documented many entrepreneurs setting up e-commerce-centric businesses and internet services. Going online is the first threshold of moving a business beyond its hometown. Women owners are running unique business models—one set up a local food aggregator’s forum, another a platform that buys failed start-ups and re-pitches them to investors after a revamp. In both Jaipur and Lucknow, a large number of entrepreneurs have put together local chains, bakeries and more, with an equally adept online arm.

Women entrepreneurs shared insights that depart from the stated objectives of scaling up, raising funds, growing big businesses and leveraging significantly. First, entrepreneurs needn’t always think large scale. If they meet the demand and are able to grow their business within that market environment, they are in good stead. Not every business needs to be national. Not all entrepreneurs need to multiply before they make money. Many have profits to show and can expand on the basis of internal accruals. Second, most entrepreneurs—some large and established, others still ideating—said that they were not in a rush for funding from investors.
A view has emerged that start-up owners can grow ideas faster if the business is cash positive, allowing the proceeds to fund the next cycle. Are women taking this approach because it is practical and keeps risks at bay? Many women asserted that they were reluctant to leverage someone else’s money, leading them to opt for commerce businesses with high-margin products that allow them to make money on every sale.
The proliferation of misogyny via trolls on the internet speaks volumes about the ways in which the wider, global online environment may in itself be hostile towards women. In India, too, the internet has brought about a great degree of vulnerability, despite being a tool that is designed to empower. The threats come in the form of cyber-crime, trolling, harassment and sometimes physical abuse.
Women receive far more social media abuse than their male counterparts, and the intensity of the abuse is higher. Nitin Pai of Takshashila Institute in Bangalore says, “There is a contest between narratives of prejudice and tradition. Whichever way you cut it—political or ideological—women are at the receiving end. If any of these narratives—conservatism, prejudice, tradition—win, women are at the losing end. It’s important for women to stand up and take this stance much more than men.” He notes how trolling attracts an audience as the drama plays out before spectators.
For an action to qualify as violence—as illustrated by the UN Declaration’s emphasis on ‘psychological harm or suffering’—physical proximity and contact is not a necessary condition. While the forms of violence change with the medium through which it is carried out, violence continues in its new and multiple digitised avatars.
We can put a phone in a woman’s hand, but how does that empowerment play out into her real life? A male dominated society continues to ostracise efforts by women to stand on their feet and take charge.
A government survey shows that almost 79 percent of women-led establishments are self-financed. Women entrepreneurs find it easier to turn to family and to start a business with money that already belongs to them.
We need policies that are holistic for women. On the economic front, many states and the centre have talked about funds to support female founders. However, most of the procedures to access those funds are complex and tedious. The government, for its part, is trying to simplify the process, but the fact remains that for the administration, start-ups are a new story and they too face a steep learning curve. Women-centric funds have come up, but these are not sufficient to cover the entire canvas of new ideas that are emerging. Traditional investors, on the other hand, are mostly chasing valuation-driven stories.
A few years ago, the government had mooted the idea of the Bharatiya Mahila Bank. Now, it is being merged with State Bank of India. Why has it not been grown as an independent bank? How does a merger help women for whom this was to be a go-to place for funding? This is the big question: how do we set up a framework? We don’t need just one, but many policy moves to create sufficient outlets of funding and loans for women.
Policy challenges also remain with respect to getting more women online, and preventing those already in the internet universe from retreating. Social media trolling and sexual abuse are making the internet a tough place for women. Recently, the Women and Child Development Minister Maneka Gandhi said that the government would take action against trolls. However, there remains ambiguity as to how this would be implemented and whether this is a policy decision or a knee-jerk reaction.
Could the condition of women’s economy be an answer to India’s growth-stickiness? Could this be the one factor that goes beyond public spending in infrastructure? Is it time to go beyond a gender-neutral approach to recognising and rewarding efforts by women towards building the new economy?
The Industrial Revolution was one of the big turning points in economic history because it brought economic identity, empowerment and wealth to people. However, the beneficiaries were mostly men; women received only by-product benefits from those economic returns. The revolution cut down distances and created shop floors, but it did not close societal gender gaps.
However, with this digital revolution, women can lead the way. Not only can they contribute by being a force of growth and wealth, they can use it to shatter the glass ceiling of archaic workspaces and build their own success stories. In the new, internet-dependent India we live in, there is a massive shift towards self-start companies and risk-hungry “digital and dot” projects, and a shift from being employed to being the employer. Women are at the centre of this. One merely has to browse through Twitter or Facebook to find stories of successful women leading businesses from e-commerce to content companies. In India, there are two million SMEs registered on Facebook, and a big chunk of those are women-led businesses.
Harnessing the power of women could change the growth matrix, says a KPMG report. “Given the current economic scenario, some of the key national imperatives to propel India into the next wave of growth include creating employment opportunities for special segments such as women workforce.”
Women are at the heart of the country’s manufacturing, digital and service boom. “Making in India” is, simply, putting an idea to work.
(This essay originally appeared in the third volume of Digital Debates: The CyFy Journal)
Accenture, “Narrowing the Gap,” https://www.accenture.com/in-en/gender-equality-research-2016
Internet Live Stats, “India Internet Users,” http://www.internetlivestats.com/internet-users/india/ (accessed August 8, 2016)
Shaili Chopra, “With gender parity India’s economic growth can get a boost by 27%,” DNA, August 15, 2016, http://www.dnaindia.com/money/report-celebrating-india-s-independence-with-women-taking-the-lead-2245164
Sadaf Vasgare, “From heading PepsiCo to the State Bank of India, these women don’t only rule their homes but the boardrooms of some leading companies,” DNA, May 8, 2016, http://www.dnaindia.com/money/report-five-powerful-indian-mothers-in-business-2209997
Indo-Asian News Service, “Over 1.7 Billion Women in Emerging Economies Do Not Own Mobile Phones: GSMA,” NDTV Gadgets, March 4, 2015, http://gadgets.ndtv.com/mobiles/news/over-17-billion-women-in-emerging-economies-do-not-own-mobiles-phones-gsma-667123
Anja Kovacs, Richa Kaul Padte and Shobha SV, “‘Don’t Let it Stand!’ An Exploratory Study of Women and Verbal Online Abuse in India,” Internet Democracy Project, April 2013, https://internetdemocracy.in/wp-content/uploads/2013/12/Internet-Democracy-Project-Women-and-Online-Abuse.pdf
All India Report of Sixth Economic Census, Government of India, 2012, http://mospi.nic.in/Mospi_New/upload/census_2012/AIR6EC_main.html
Published on 3 March, 2017
Coercive cyber measures, like any military option, should be the culmination of extensive assessments by India of its intelligence and technical capabilities. Take two possible targets: the Hub Power Station near Karachi and the Karachi Stock Exchange (KSE, now the Pakistan Stock Exchange). The Hubco plant is among the largest thermal power-generating projects in Pakistan, capable of “provid[ing] 10+% of [the] country’s electricity demand”. The KSE is the country’s premier financial trading platform. To mount a cyber attack against either installation, military planners would need to be supported by intelligence inputs from the ground, providing valuable information about:
Both require an assessment of the installation that goes well beyond aerial or satellite reconnaissance. Without strengthening India’s intelligence networks in Pakistan, therefore, a serious attack on its digital networks will be difficult to conceive or execute.
Then there is the matter of the ‘cyber weapon’ itself. Few government agencies in India, the National Technical Research Organisation included, have the in-house expertise required to discover and exploit vulnerabilities in ways that can manipulate or destroy the integrity of electronic data. India’s armed forces fare marginally better, having deployed ‘red teams’ that do penetration testing to protect their own networks. But the military too may not be in a position to create a sophisticated cyber weapon designed for the specific purpose of bringing down, say, Pakistan’s electricity grid.
It is worth remembering that Stuxnet was the product of an inter-agency effort involving the United States and Israel. Stuxnet owes its origins in no small part to the United States’ well-developed bug bounty programme, which invites hackers to identify vulnerabilities in operating systems and communications platforms. Having a bug bounty programme (which in the US is tightly regulated by the White House) contributes to a strategic culture that can co-opt technical expertise in India into the national security narrative. There is no reason why New Delhi should shy away from a programme for its defence and intelligence agencies, given the talented pool of computer scientists in the country. In fact, internet giants like Facebook and Google routinely rely on Indian citizens to identify fixes and flaws in their products through their own bug bounty schemes. Today, Indian agencies rely on private expertise on an ad hoc basis, or buy zero-day vulnerabilities from the ‘dark net’.
An evaluation of coercive cyber measures against Pakistan by the NSA – the last step in the chain of decision-making before they are presented as a credible option to the Prime Minister – can be done only if he is able to lean on multi-agency coordination that supplies both human intelligence and technical expertise.
The tail, however, should not wag the dog. Conceiving and creating a cyber weapon will likely involve months, but this process should be guided by a political strategy as to its specific objective, likely impact, and potential fallout. Unlike conventional weapons or weapons of mass destruction, it is impossible to create an ‘arsenal’ of cyber weapons that can be deployed at will.
The first step for India’s defence planners, then, would be to absorb coercive cyber measures as a central pillar of its Pakistan policy. This would involve:
Cyber attacks are difficult to attribute to governments, as they often originate from non-state actors and are sometimes routed through servers based in a third country. Links between non-state actors and the governments of the territories in which they are based can at best be established using circumstantial evidence. In India’s case, military planners need to walk a fine line between denying any involvement in a cyber attack and signalling to Islamabad that its so-called ‘asymmetric’ actions will be met with similar responses. Were New Delhi to be implicated in a coercive cyber manoeuvre against Pakistan, Indian diplomats should be prepared to defend the legality of its conduct in multilateral venues like the United Nations.
In essence, India’s legal defence of a cyber attack on Pakistan would be to claim an act of reprisal. Given the UN’s visible lack of enthusiasm for enacting a Comprehensive Convention on International Terrorism, India will have to rely on traditional principles of state responsibility to hold Pakistan responsible for the actions of groups like the Jaish-e-Mohammed and Lashkar-e-Taiba. Without wading into the vast and rich jurisprudence on the subject, it is sufficient to say that even if India were to produce evidence linking terrorist groups to the Pakistani government, it may be difficult to satisfy the purely legal requirements.
Article 8 of the Draft Articles on Responsibility of States for Internationally Wrongful Acts states:
“The conduct of a person or group of persons shall be considered an act of a State under international law if the person or group of persons is in fact acting on the instructions of, or under the direction or control of, that State in carrying out the conduct.” (emphasis added)
The ‘direction/control’ test is a high standard to which India or the international community may never hold Pakistan. The first hurdle for India is to meet this threshold, absent a ‘smoking gun’.
The second (and related) difficulty is to establish that attacks by terrorists are not only attributable to Pakistan but that they also violate the prohibition on the “use of force” enshrined in Article 2(4) of the UN Charter. If that seems implausible, there’s more. For India to claim “self-defence” in international law under Article 51 of the Charter, attacks such as the one in Uri would have to constitute an “armed attack” by the Pakistani state, a legal threshold that is generally accepted to be higher than the plain “use of force”.
In the aftermath of the 9/11 attacks, the United States invoked its “clear right of self defence” under Article 51 to bomb Afghanistan — a decision that polarised international opinion on the legality of its claim. In that instance, however, the US had the overwhelming support of the UN Security Council, which subsequently legitimised the intervention through the establishment of the International Security Assistance Force in 2001. In India’s case, no such support from UNSC members will be forthcoming. In any case, New Delhi has no appetite for an armed intervention of the scale seen in Afghanistan.
Simply put, it is improbable that India can convincingly make the case for “self-defence” through a cyber attack against Pakistan. Reprisals, on the other hand, involve the use of force but need not be reported to the UN Security Council, and constitute an act akin to self-defence in response to attacks of a lesser degree.
Amidst this legalese, it is important not to miss the larger, political picture. For India to offer a convincing defence of retaliatory cyber measures against Pakistan requires coordinated planning between the Ministry of External Affairs (MEA) and the National Security Council Secretariat. Irrespective of what New Delhi may term its actions, the cyber attack should be a proportionate response to Pakistan’s transgressions. The MEA and its lawyers should advise the NSA on this count and thoroughly review the cyber weapon’s impact on civilian populations. To help mould the evolving body of international law in its favour, India must also step up engagement with international platforms such as the UN Group of Governmental Experts on ICT security and the Tallinn Manual consultations on the law of armed conflict in cyberspace.
Coercive cyber measures offer some advantages to a policy planner where conventional military options appear limited, as in India’s case against Pakistan. Nevertheless, several concerns persist, which should prompt New Delhi to examine the desirability of this option.
The lesson here, perhaps, is that a declared doctrine on the use of cyber weapons, pursuant to the building of capacities, can signal deterrence to Pakistan more effectively than the use of such instruments in isolation. It will likely take years to bring such a strategy to fruition: after the May 1998 tests, it took India nearly five years to articulate a nuclear weapons doctrine. The rapid advancement of digital technologies suggests that a cyber doctrine, if articulated, should be flexible and open to review and possible restatement. Pakistan’s nuclear weapons capability is often cited as a dead-end for India’s conventional superiority, but cyberspace opens a new theatre of conflict. It is critical, however, that this process begins now, failing which India could be drawn into a confrontation in digital spaces with Pakistan without a clear assessment of its goals or outcomes.
(This essay originally appeared in the third volume of Digital Debates: The CyFy Journal)
“Our Business,” Hub Power Station, accessed September 1, 2016, http://www.hubpower.com/our-business/hub-power-station/#CC
N. Perlroth and D.E. Sanger, “Nations Buying as Hackers Sell Flaws in Computer Code,” The New York Times, July 13, 2013, accessed August 30, 2016, http://www.nytimes.com/2013/07/14/world/europe/nations-buying-as-hackers-sell-computer-flaws.html
“Draft Articles on Responsibility of States for Internationally Wrongful Acts, with commentaries,” United Nations, accessed September 5, 2016, http://legal.un.org/ilc/texts/instruments/english/commentaries/9_6_2001.pdf
D. Bowett, “Reprisals Involving Recourse to Armed Force,” The American Journal of International Law, Vol. 66(1), pp. 1-36, http://heinonline.org/HOL/LandingPage?handle=hein.journals/ajil66&div=5&id=&page=
“Bush announces opening of attacks,” CNN.com, October 7, 2001, accessed August 29, 2016, http://edition.cnn.com/2001/US/10/07/ret.attack.bush/
J.A. Green, “The Article 51 Reporting Requirement for Self-Defense Actions,” Virginia Journal of International Law, Vol. 55(3), 2015, http://www.vjil.org/assets/pdfs/vol55/VJIL_55.3_Green_FINAL.pdf
Published on 2 March, 2017
We do not yet know the extent of any possible fiscal windfall as the Reserve Bank of India has not released official figures on what portion of the approximately ₹15 trillion of demonetised notes were returned. Likewise, we do not yet know the extent to which depositors of illicit cash will be discovered and penalised by tax authorities.
The other remaining major rationale for demonetisation has been that it will push the economy toward greater digitalisation and formalisation as a direct consequence of the temporary cash crunch especially in the immediate aftermath of old notes ceasing to be legal tender.
While it’s not in dispute that increased digitalisation in particular is a laudable goal, what remains untested is the extent to which this actually happened. Unfortunately, much analysis has focused exclusively on data relating to the post-demonetisation period to try to draw conclusions. Thus it’s now become a refrain that after rising in December 2016, various measures of digital transactions fell back again in January 2017. For example, as Livemint wrote, “[this shows] that phenomenal growth in use of digital payments methods is clearly unsustainable once liquidity crisis eases. Where are we, three months after Modi’s demonetisation move?” (9 February 2017)
This is simply poor data analysis, as it neglects a much longer data series on digitalisation predating 8 November 2016. Using data on the value of Point of Sale, Debit and Credit Card (PoS) transactions going back to April 2011, the earliest for which data is readily available, and the value of Immediate Payment Service (IMPS) transactions from September 2012, again the earliest available, I trace the dynamics of these two important components of digital payments. Crucially, unlike treatments which look at the raw data, my analysis accounts for the fact that the nominal money stock has grown substantially over this period. Basic economic theory suggests that, as the nominal stock of money grows, its components will also grow roughly in proportion, at least as a first approximation. That is why it is necessary to normalise the raw data relative to the money stock.
Thus, my baseline specification adjusts both PoS and IMPS by the value of M3, a broad measure of the money stock. Charts 1 and 2 depict scatter plots of PoS and IMPS respectively against M3, with all variables converted to natural logarithms, which better enables tracking change over time. It is clear from eyeballing these charts that there is a very tight relationship between both these measures of digitalisation and the overall stock of money, which in turn is correlated with nominal GDP. In other words, M3 predicts the evolution of both PoS and IMPS well. In particular, both scatter plots show a positive, upward-sloping relationship, which means that even when adjusted for the stock of money, both PoS and IMPS have been increasing in importance, with the latter showing an especially sharp increase since its introduction.
This is reconfirmed through regression analysis, which shows a tight relationship between both PoS and IMPS respectively and M3. Both regressions show very high predictive power (R-squared well over 90 percent in both cases) and very low standard errors. For PoS, I compute an elasticity of just over 2, which means that a one percent increase in M3 is correlated with approximately a two percent increase in PoS (note that this omits the last three months, which report data on only four banks). For IMPS, the elasticity is a very high 35, meaning a one percent increase in M3 is associated with a whopping 35 percent increase in IMPS. This high number must be seen in light of the recent introduction of IMPS, suggesting we are still in an early adoption phase featuring rapid growth. By contrast, PoS has been around much longer and is slowly but steadily gaining in importance.
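The log-log elasticity regression described above can be sketched in a few lines. The monthly figures below are synthetic placeholders (the actual RBI series on PoS and M3 is not reproduced here), constructed so that the true elasticity is 2, matching the article's estimate; the helper is a plain OLS slope, which in a log-log specification is exactly the elasticity.

```python
import math
import random

random.seed(0)

# Synthetic monthly data: M3 grows about 1% per month; PoS tracks M3 with a
# built-in elasticity of 2 (the article's estimate) plus small noise.
TRUE_ELASTICITY = 2.0
m3 = [100.0 * (1.01 ** t) for t in range(60)]
pos = [0.01 * m ** TRUE_ELASTICITY * math.exp(random.gauss(0, 0.02)) for m in m3]

# Convert to natural logarithms, as in the article's charts.
log_m3 = [math.log(m) for m in m3]
log_pos = [math.log(p) for p in pos]

def ols_slope(x, y):
    """Ordinary least squares slope of y on x; in a log-log fit this is the elasticity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

elasticity = ols_slope(log_m3, log_pos)
print(round(elasticity, 2))  # close to 2: a 1% rise in M3 goes with roughly a 2% rise in PoS
```

On the real series, the same slope computed on log(PoS) against log(M3) is what yields the "just over 2" figure quoted above; substituting log(IMPS) for log(PoS) would give the much larger IMPS elasticity.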
The sharp uptick in December 2016 and the sharp drop in January 2017 are clearly outliers from the broader trend, obviously driven by the immediate short-run impact of 8 November. In fact, compared to our model, the jump in December was so large that, even though it fell back in January, it is still above trend. In particular, for PoS, December was about 13 percent higher than the model predicts and January was more than six percent higher than predicted.
The crux is that those who claim that digitalisation fell in January by looking at only two or three data points completely miss the fact that this is still more than we would have had if demonetisation hadn’t occurred! The narrative of a sharp rise and then drop in digitalisation is simply misleading and based on poor data analysis. Even a cursory analysis of long term trends shows we’re still above trend, due to demonetisation.
In addition, for analytical clarity, it is necessary to distinguish two categories which are often conflated: digitalisation and the move towards a cashless economy. The analysis presented thus far clearly supports the narrative of increasing digitalisation. But the reality is that cash remains central to the Indian economy and will likely remain so for some time to come. To check the importance of cash in the economy, I conduct a similar analysis comparing the natural logarithm of Government Currency Liabilities to the Public (GCLP), a component of the money supply, to the natural logarithm of the overall money supply itself. Chart 3 depicts a scatter plot of this relationship, which is again tight and upward-sloping, meaning that cash in the hands of the public is growing as the overall stock of money grows.
A regression analysis (again with high R-squared and low standard errors) confirms this, giving us an elasticity of government cash liabilities as against the money supply of about 2. This means that a one percent increase in the money supply has over the last five years led to approximately a two percent increase in the amount of cash in the hands of the public. In other words, while digitalisation has proceeded, cash has not become unimportant. It’s necessary to stress that this trend for cash incorporates a couple of months of data following demonetisation. If demonetisation indeed leads to less cash in the economy, we should see this elasticity fall over time.
Will a one percent rise in M3 correlate with a bigger than two percent increase in PoS going forward? That is, will the elasticity eventually and permanently increase? We simply won’t be able to answer this question until we have more data.
One thing is clear: if the Ministry of Finance (MoF) and the RBI are serious about promoting the shift to digitalisation, they are going to need to increase the incentives and reduce the costs to average citizens, everything from transaction fees to the poor internet connectivity that leads to transaction failures. The push towards digitalisation requires getting the basics right. Having paid the price of considerable disruption to the economy, it is imperative that both MoF and RBI push ahead with the digitalisation drive to reap the full long-term benefits. As my research shows, digitalisation in particular has been above trend since demonetisation. Both MoF and RBI must track whether this is a permanent and structural shift in behaviour. If it is, despite the initial pain, demonetisation may yet yield the long-term gains that Prime Minister Modi has spoken about.
This article originally appeared on ORFONLINE.ORG.
Published on 20 February, 2017
Following the publication of India’s draft National Encryption Policy (NEP) in September 2015—which has since been withdrawn—debates have ensued on questions related to the regulation of encryption technologies across telecommunications services, over-the-top Internet intermediaries, and voice-over IP services. On 12 August 2016, the Observer Research Foundation organised a multistakeholder discussion on encryption, bringing together representatives from the government, civil society, and industry and trade associations, as well as global internet companies. The consultation sought to generate a comprehensive dialogue on the future of encryption, in the hope that future engagements on encryption will prove to be more inclusive of the various stakeholders who play a crucial role in the management of the country’s contemporary digital ecosystem.
The multistakeholder panel discussion arrived at some key conclusions. First, it is in the best interest of India’s digital economy to strive for best-in-class encryption norms that can rival those of other leading data protection nations such as Israel, the United States and Germany. Second, encryption policies need to go beyond raw economic considerations, and must also take into account the privacy of individual digital consumers. Third, all stakeholders (governmental or otherwise) involved in the evolution of encryption practices in India should acknowledge that encryption standards and technologies are difficult to address solely at the policy level. As a result, those charged with approaching the issue of encryption may consider moving away from conventional State-centric regulatory practices, and instead allow their regulatory initiative to embody a true multistakeholder, and perhaps even autonomous, form. Finally, there is a need for a more concerted effort to understand the inherent benefits of encryption, as well as for increased transparency in the rationale given by the government for increased state involvement in the calibration of encryption standards.
This report begins with a discussion of the details of the now-withdrawn draft NEP. The report will then outline various comments that were provided by the stakeholders present at the consultation with regards to how the Indian government can work with civil society, members of the national judiciary, as well as the private sector in order to design a more comprehensive approach to regulating encryption.
The NEP stipulated a hybrid licensing regime requiring suppliers of encryption technologies, and platforms providing encrypted communication channels, to deposit their decryption keys with the Indian communications regulator. This requirement complements Clause 37.1 of the Unified License Agreement, which explicitly forbids bulk encryption on the licensee’s communications networks. The NEP also required that encrypted messages be stored in plaintext form for 90 days, in case law enforcement personnel needed access to their contents during criminal investigations. Many elements of the NEP drew heavily from both the US Communications Assistance for Law Enforcement Act and the British Regulation of Investigatory Powers Act.
What was therefore being proposed was a hybrid model combining elements of both key escrows as well as backdoors to encrypted devices. The policy mandated that every encryption vendor or service provider operating within the Union of India would need to provide the government with working copies of the software and hardware that have been used for encrypting communications. In the event that law enforcement personnel would require access to a specific communications device, the service provider would be obligated to provide a backdoor through which said law enforcement agency could access the device of interest.
The three principal weaknesses with the NEP that were identified by the stakeholders were as follows:
First, key escrows and the centralisation of encryption keys (as was being stipulated within the NEP) leaves encrypted conversations vulnerable to malicious attacks. The skepticism towards using key escrows in contemporary digital systems stems primarily from two historical precedents: a) debates that took place in the 1990s when the Clinton administration debated the “Clipper Chip” escrow mechanism in the United States; and b) debates within the Council of Europe during the drafting of the Budapest Convention on Cyber Crime during the early 2000s. A recent report by Abelson et al. addresses the dangers of key localisation, and subsequently makes it clear that the provision of backdoors within encrypted communications networks will jeopardise the integrity of entire communications systems.
Second, the conditions outlined in the NEP raised concerns amongst stakeholders on whether or not the State would misuse its newfound powers over encryption technologies in order to expand surveillance programmes. This question of governmental misuse was of particular concern to civil society representatives who flagged it in the context of ongoing debates around the ‘Right to Privacy’, whose parameters are not fully defined. In addition to this, there currently exists no formal legal test for determining when a data requisition order can be justified on the basis of a ‘national security’ concern—a justification that is often offered by security agencies in cases of protracted and unwarranted data surveillance.
Third, concerns were also raised with regard to the lack of judicial oversight stipulated within the NEP. Section 84A of the Indian Information Technology Act, 2000 vests the government with the remit to regulate encryption, but the NEP should not, as a result, be twice removed from adequate judicial oversight. Several stakeholders made it clear that future encryption consultations would need to consider whether the final regulation takes the form of legislation or executive policy, with a view to ensuring direct parliamentary oversight of the matter.
The participants at ORF’s discussion made it clear that any future attempt at regulating encryption would need to take into consideration the diverse agendas and interests of the various stakeholders involved in the management of India’s ICT ecosystem.
First of all, it is clear that encryption needs to be maintained as a normative best governance practice, wherein the security of certain types of communications and transactions needs to be guaranteed. This pertains to the agendas of all of the major stakeholders. The digital economy can only prosper if the commercial transactions taking place within the Union are secure. Regulatory certainty and stability will also prove to be a vital consideration in future efforts to make India a hub for international digital infrastructure. Simultaneously, any encryption reform will also have to respect privacy and free speech—values that India has already pledged itself to protect through its participation in international instruments.
The ORF roundtable also revealed that more discussions need to be conducted to determine which institution or regulatory body is going to be given the mandate of setting technical/encryption standards and protocols. Should the process of setting standards be a bottom-up multistakeholder process, or will it be top-down, by which the government defines certain guidelines on the kind of standards that are to be maintained by communication platforms and professional suppliers of encryption technologies?
If encryption standards are going to be specified, then the appropriate regulatory authority must articulate in a clear and transparent manner the reasons for specifying such a threshold. The debate on key lengths and encryption ‘thresholds’ is not conclusive: at the workshop, some panelists suggested that India’s encryption policy should, at a minimum, mandate the 128-bit encryption suggested by the Reserve Bank of India and the Securities and Exchange Board of India. Others suggested that key lengths should be voluntarily adopted by each sector in line with its specific requirements. A recommendation favoured by many stakeholders at the consultation was to discard the concept of a maximum encryption threshold altogether, and to simply mandate minimum thresholds, or floors, instead (particularly for communications implicating government actors). This would institutionalise the use of encryption in Indian digital governance practices.
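The difference between a ceiling and a floor can be made concrete with a small sketch. The validator below is purely illustrative, an assumption for exposition and not part of any actual regulation: it encodes a 128-bit minimum of the kind the RBI/SEBI guidance suggests, and accepts any key at or above that length.

```python
import secrets

# Hypothetical policy constant: a 128-bit minimum key length (a floor, not a
# ceiling), along the lines of the threshold discussed in the consultation.
MIN_KEY_BITS = 128

def meets_key_floor(key: bytes, min_bits: int = MIN_KEY_BITS) -> bool:
    """Return True if the key is at least min_bits long.

    A policy mandating floors rather than ceilings accepts any key at or
    above this length; AES-128, AES-192 and AES-256 keys would all pass.
    """
    return len(key) * 8 >= min_bits

aes128_key = secrets.token_bytes(16)   # 128-bit key: meets the floor
legacy_key = secrets.token_bytes(8)    # 64-bit key: below the floor

print(meets_key_floor(aes128_key))  # True
print(meets_key_floor(legacy_key))  # False
```

Under a maximum-threshold regime, by contrast, the check would be inverted and stronger keys would be the ones rejected, which is precisely why many stakeholders preferred mandating minimums.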
The governmental agencies responsible for re-drafting the new encryption policy will also need to articulate why an encryption policy needs to be mandated in the first place, and what governmental considerations have subsequently been injected into the design of a newly reformed NEP. In this regard, both civil society as well as private sector stakeholders made it clear that any new encryption mandate must respect the concept of proportionality: encryption regulation must be justified in relation to the concrete objectives that it seeks to achieve. The Indian government should become more forthcoming with regards to what kind of data (note: not specific requests per se) it is demanding through its data requisition protocols. Companies to whom data requisition orders are issued can also play an important part in raising civil society’s broader understanding of the kind of data that the government wishes to access. This could be achieved, for example, through the publishing of annual corporate transparency reports that would clearly aggregate and classify all data requisition orders that have been delivered to a company by the Indian government.
Given the fact that there currently exists no constitutionally codified ‘Right to Privacy’ within Indian law, many stakeholders suggested that the encryption debate in India may indeed be premature. They raised concerns over whether an encryption policy would be effective without formal legal safeguards for the protection of individual privacy.
An agreeable resolution to the encryption debate in India should acknowledge and balance the concerns of all stakeholders. India’s digital economy can only be as robust as the measures that are in place to protect the data and transactions flowing through its networks. Businesses and consumers operating within the Indian digital economy need to be assured of the integrity and authenticity of their data, and encryption plays a major role in fulfilling these requirements. As India’s digital markets prosper, they are likely to attract attacks of increasing sophistication, both from state and non-state actors, necessitating the creation of secure and encrypted networks. At the same time, law enforcement agencies in India will come under increasing strain to investigate and prosecute cyber-crimes, as well as ‘offline’ criminal activities coordinated through digital networks. Their capacity to enforce the rule of law is critical to the digital economy’s smooth functioning and in maintaining public confidence in safe and accessible digital spaces. To the extent that pervasive encryption technologies are implicated in the process of accessing electronic data, the concerns of LEAs must also be taken into account.
Whether this ‘resolution’ of India’s encryption debate takes place through a policy is another question altogether. Any encryption policy articulated by the Indian government should be mindful of the reality that technological developments are likely to outpace such regulations in a matter of years. Higher encryption standards, implemented first by the private sector and gradually absorbed by public-sector undertakings, can become the norm without having black letter regulations in place. Similarly, the capacity of law enforcement agencies to tackle crimes online is not strictly related to the encryption standards prescribed by the government. Given that internet users in India will increasingly rely on communication platforms and financial gateways built in the United States and Europe, law enforcement agencies have no option but to strengthen their ability to retrieve data from foreign companies – with or without an encryption policy in place.
In India, conditions for lawful access to data are prescribed under Section 69 of the Information Technology Act, 2000 and rules made pursuant to this section. Given this legislative ability, some have wondered if it is within the remit of an encryption policy to prescribe access to electronic data. The mandate of an encryption policy, the argument goes, must only be limited to setting the modes and medium for encryption. It must not duplicate or circumvent due procedure prescribed under Indian statutes.
Regulators, therefore, are faced with three distinct choices: draft a ‘future-proof’ encryption policy that mandates the highest possible encryption standards, proceeding on the assumption that the capabilities of India’s law enforcement agencies will grow commensurately; draft a policy that mandates low encryption standards for devices and products currently available in the market, with a view to intercepting their contents; or defer the drafting of an encryption policy with the understanding that such policies cannot keep pace with evolving technologies.
Of the three, having a token policy with low standards of encryption is the least attractive option for the Indian government, businesses and consumers alike. While it may give LEAs the access they need for crime investigations and prosecutions, mandating lower encryption standards would only pull down the overall security of the ICT ecosystem. Not only would such a policy discourage cyber security innovation and the introduction of state-of-the-art devices into the digital economy, it would also render Indian users vulnerable to attacks by malicious actors.
In a growing market like India, where a majority of encryption platforms originate internationally, there is some merit in pursuing a hands-off approach towards an encryption policy. As mentioned previously, most providers of popular encryption platforms are currently based in Silicon Valley, and this trend is unlikely to change in the next decade. The attractiveness of such platforms, and Indian users’ unhindered access to them, is partly responsible for the country’s rapidly growing rate of internet penetration. Encryption policies should not set the clock back on such growth.
If, however, India seeks to develop a domestic market for encryption products and services, it can ensure that technological development is guided by regulation. This can only be based on an assessment of the overall consumption and trends in use of encryption services by Indian users and businesses. In the United States and United Kingdom, for example, encryption technologies have evolved rapidly with a concurrent enhancement in the interception capabilities of law enforcement agencies. This is not the case in India, and indeed the rest of Asia, Africa and Latin America. Were the government to consider implementing an encryption policy, policymakers should also consider the possibility that these regulations may be emulated by other emerging markets in the future.
This encryption policy must back advancements in technology, and follow international best practices. It must be complemented by strengthening the capacity of law enforcement agencies through the streamlining of processes under Mutual Legal Assistance Treaties and negotiating data-sharing agreements with countries that handle the bulk of Indian data.
What would some of the most important aspects of such a policy look like?
Mandating 256-bit (or higher) encryption to secure government-to-government communications.
These recommendations, however, must also keep in mind concerns of the industry with regard to setting limits on key lengths for encryption. It was suggested during the ORF roundtable that from the perspective of an internet business, setting both minimum and maximum key lengths can be counterproductive. The strength of encryption required varies greatly from one industry sector to the next. It was argued that the sheer diversity of considerations ranging from specific security needs, product design, compatibility, performance, and other variables, make it difficult for a one-size-fits-all approach to be effective. For instance, specifying a minimum level of encryption may hinder a business’ ability to detect malware or filter spam.
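The industry’s caution is easier to weigh with the arithmetic in view: every additional bit of key length doubles the brute-force search space, which is why the gap between the floors discussed above is so large. A minimal Python sketch (illustrative only; a real deployment would use a vetted cryptographic library rather than raw random bytes):

```python
import secrets

# Each extra bit of key length doubles the number of possible keys,
# so the brute-force search space grows exponentially:
for bits in (56, 128, 256):
    print(f"{bits}-bit key -> {2 ** bits:.3e} possible keys")

# Random keys of the lengths discussed in the text
# (16 bytes = 128 bits, 32 bytes = 256 bits):
key_128 = secrets.token_bytes(16)
key_256 = secrets.token_bytes(32)
assert len(key_128) * 8 == 128
assert len(key_256) * 8 == 256
```

Note that a 256-bit key is not ‘twice as strong’ as a 128-bit one: its search space is larger by a factor of 2^128, which is one reason blanket maximum thresholds age so badly.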
A strong encryption policy can upgrade the overall standard of security in cyberspace, enhance free speech and stimulate e-commerce. It can also encourage domestic research and development in cyber security and cryptographic tools. This will not only address the technology deficit that Indian law enforcement suffers from but also provide it with the much-needed human resources to assist with cyber-related crimes. It can also buttress India’s data protection norms, which would not only increase trust within the country but also make the Indian market a more attractive destination for international trade. As more companies situate their data in India – not on account of data localisation policies, but because it is financially attractive to set up base in the country – the menu of policy options available to LEAs to investigate and prosecute cyber crimes will also expand simultaneously. But the encryption policy should not be tied down by the complex bilateral and multilateral conversations around electronic data sharing.
(This multistakeholder consultation was supported by grants from the William and Flora Hewlett Foundation and Google Inc.)
Samir Saran – Vice President, Observer Research Foundation
Arun Mohan Sukumar – Head, Cyber Initiative, Observer Research Foundation
Gulshan Rai – Coordinator, National Cyber Security, Prime Minister’s Office, Government of India
Ankhi Das – Director of Public Policy for Facebook – India, South and Central Asia
Bedavyasa Mohanty – Junior Fellow, Observer Research Foundation
Prasanna – Data Security Consultant, GigSky
Subho Ray – President, Internet and Mobile Association of India
Karuna Nundy – Lawyer, Supreme Court of India
Key escrow is an arrangement under which governments require every encryption service or provider to deposit a decryption key with a third party. That third party can then provide the decryption key to the government for investigation purposes.
 Harold Abelson, et al. “Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications,” Communications of the ACM, 58:10 (2015).
 Reserve Bank of India, Report and Recommendations of the Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds (2011) available at http://cab.org.in/IT%20Documents/WREB210111.pdf (Accessed September 3, 2016)
 Securities and Exchange Board of India, Report of the Committee on Internet-Based Securities, Trading and Services (2000) available at http://184.108.40.206/RRCD/oDoc/29-nettrading_200059.pdf (Accessed September 3, 2016)
This article originally appeared in ORF Special Report 29
18 February, 2017
On August 12, 2016 the Observer Research Foundation convened the first in a series of multistakeholder roundtables on encryption. This report is the outcome of the discussion of issues and proposals for solutions conducted at the roundtable.
Over the past couple of years, India has registered rapid economic growth, with the GDP growing 7.6 percent in the last fiscal year. The country’s economic profile has also witnessed a shift over a long period—from rural-based agricultural production to urban economic activities, and from low-value manufacturing to high-value services. The economy is on track to maintain its growth rate for 2016. While economic activity remains buoyant, the country still has a long way to go. The Modi government must capitalise on the current economic momentum and use it to accelerate its reform agenda.
One of the areas requiring regulatory attention is the property market. After all, a robust property rights system is a prerequisite for sustained economic growth. Secure tenure has been shown to lead to greater land market efficiency, ensure greater access to formal channels of credit, incentivise investment in physical and human capital, strengthen growth performance, reduce macroeconomic volatility, and encourage equitable and efficient distribution of economic opportunity.
The world is currently in the midst of a post-industrial digital revolution—epitomised by information intensity, connectivity, specialisation, and globalisation. This new technological era has the potential to enable an ecosystem in which society is motivated by collaborative interests rather than individual gain. To harvest the benefits of this technology-centric paradigm, the Modi government launched the ambitious Digital India Mission in 2014, whose cornerstone is the provision of reliable and current data to facilitate efficient delivery of government services. Yet, the information architecture that houses this data currently resides in a set of disparate databases. The current system still allows for the alteration or manipulation of data with relative ease. This problem is especially pervasive in the case of land records. The centralised control over the land records and registration systems in the States offers minimal transparency and accountability. As a result, critical data is often unavailable, leading to inordinate delays in real-time decision-making. Poor land record-keeping makes buying land difficult, leading, in turn, to delays in infrastructure projects. Land is often acquired for development projects, but the 7/12 land extract (an extract from the land register maintained by the revenue department) does not reflect these changes. Fraudulent land transactions are rampant as a result of this administrative deficiency. In certain cases, people mortgage government-acquired properties to obtain bank loans. Though a central government program to digitise and update land records was relaunched in 2016—the Digital India Land Records Modernization Program (DILRMP)—it is still left open to similar iniquities.
India must institute a standardised property rights regime if it aims to be an economic powerhouse. To bolster current systems, a decentralised, open, and transparent method of record-keeping needs to be introduced. This must be supplemented by a legal framework capable of guaranteeing and enforcing property rights. A possible solution to the current record-keeping conundrum lies in blockchain technology.
The analytical discussion of the economic benefits of land titling and registration has evolved—from theoretical discussion and descriptive statistics to a discussion based on increasingly rigorous quantitative analyses. Significant efforts have been made by researchers across the globe to quantify the economic benefits of secure ownership, in general, and land registration systems in particular. For example, a study in India found that land registration leads to significant interest payment savings. The quantification of economic impacts is relevant to policy-makers, as it helps demarcate potentially worthy targets for public spending.
The impact of land registration on investment has also been thoroughly examined in many parts of the world. In Costa Rica, for example, it was found that a correlation existed between the degree of tenure security and farm investment per unit of land. Meanwhile, in Thailand, land titling was found to stimulate land transactions, and in Indonesia, it was established that higher security of tenure led to higher land prices.
Peruvian economist Hernando De Soto is among the leading proponents of the merits of secure tenure. He argues that the lack of a formal property rights system is the root cause of poverty in developing nations and is responsible for the proliferation of informal real estate and employment sectors. He coined the term ‘dead capital’ to refer to assets that cannot be easily transacted, valued or used for investment. Slums are a good example of dead capital, as their residents are scarcely able to realise the economic potential of the land they live on. Through empirical studies, De Soto calculated that the total value of dead capital in the Global South is around USD 9.3 trillion. He says that with the introduction of a formal property regime, the poor can begin to look at their assets as more than just shelter, and begin to leverage property to gain access to credit and grow their business. It is important to note that De Soto is currently assisting Georgia’s effort to develop a blockchain land registry.
Before discussing how blockchain technology can effect better land administration outcomes, it is important to understand the key features of India’s land administration system. A distinction exists between land records and land registration and the way in which these recordings have evolved historically. Land records were introduced swiftly in India after colonisation, in rural areas with agricultural potential. Forests and urban areas were excluded from the purview of this recording system. Land records aimed not to document rights but to collect taxes. Land revenue became the main source of government income throughout the colonial period. The system of registration of documents concerning transfers of immovable property was first introduced by the Bengal Regulation XXXVI of 1793, the Bombay Regulation IV of 1802, and the Madras Regulation XVII of 1802. The Acts authorised the Registrar to register sale deeds, gift deeds, mortgage deeds, wills, and leases. These Acts were followed by a series of enactments which culminated in the Registration Act of 1908. The Registration Act applied to all British Indian provinces, providing for the registration of all documents related to the transfer of immovable property. Things remained largely the same after Independence. The newly independent Indian regime kept the colonial system for land records and land registration largely intact. Rural areas depend primarily on land records maintained by the Revenue Department, whereas in urban settlements, people are more reliant on registration of deeds through the Stamps and Registration Department. Since Independence, the general assessment of land reforms in the Indian context has been underwhelming. According to a report by the erstwhile Planning Commission’s Task Force on Agrarian Relations, the large gaps between policy and legislation, and between the law and its implementation, are to be blamed on general political insouciance.
Under the Constitution, States in India are vested with the responsibility of maintaining land records. Specifically, entries 18 and 45 in List II of the Seventh Schedule clearly state that Land and all allied activities come under the domain of the States. At the national level, the Department of Land Resources in the Union Ministry of Rural Development has the mandate to address land policy issues. In the States, the Revenue Department manages land records, the Department of Stamps and Registration oversees registration, and the Survey Department carries out land surveys. At the taluk or tehsil level, the officer in charge of maintenance of land records is the Tehsildar. At the circle level, land records are in the custody of the Revenue Inspector and Circle Inspector. The Village Accountant (VA) is usually in charge of a single village or a group of villages. The VA rounds out the sub-stratum of land administration system.
During the Seventh Plan in 1987-88, a centrally sponsored scheme on the Computerization of Land Records was introduced as a pilot project in certain districts across the country. The scheme, however, made little progress. Some of the operational problems included a delay in the development of a need-based software, poor computer training facilities for field revenue staff, a dearth of private contractors to update data, and a general lack of administrative focus.
In 2008 the National Land Records Modernization Programme was launched, aiming to modernise the management of land records, minimise the scope of land/property disputes, foster transparency in the land records maintenance system, and gradually move towards guaranteed conclusive titles to immovable property. Under this initiative, land records were to be computerised in all districts of the country by 2017 using a Public-Private Partnership model. The scheme did not make much headway as the costs of implementation proved a challenge.
Recently, the digitisation of land records was relaunched under the Digital India Land Records Modernization Programme, for which the 2016 Budget allocated INR 150 crores. The relaunched project is slated to include the following: computerisation of all land records including mutations; digitisation of maps and integration of textual and spatial data; survey/re-survey and updating of all survey and settlement records, including creation of original cadastral records wherever necessary; computerisation of registration and its integration with the land records maintenance system; development of core Geospatial Information System (GIS); and capacity building. Overall, this scheme is a positive first step towards creating a robust formal property system. There are, however, some issues with its implementation.
The records maintained by the authorities are primarily used for fiscal purposes. The function of providing proof of title is purely ancillary to the purpose of collecting land revenue. Thus, titles to land are purely presumptive. Registration only puts an agreement between two parties on public notice but says nothing about the legal validity of the underlying transaction. This leaves titles secured through registration open to challenge in the courts of law. The Indian judicial system is currently buckling under the weight of a three-crore case backlog, 70 percent of which pertain to disputes regarding land or property. Registrars will register any instrument received without checking its validity in the absence of countervailing claims. The Registration Act, 1908 does not require vetting the validity of documents and transactions.
Due to a lack of coordination between the various nodal agencies handling land records, the information registered is not standardised. This leads to ambiguity in terms of the nature of rights being transferred by the transaction and the boundaries of the land being transacted. Further, records are not updated promptly. Thus, they rarely reflect the true nature of ownership of a particular parcel of land. A report of the CAG showed that there was a backlog of some 124,325 cases for registration of property in 2015.
The current system is rife with corruption. Experts estimate that each year, USD 700 million in bribes is exchanged at registrar offices across the country. The system has also led to the proliferation of an informal credit sector. Most poor farmers in India, due to a lack of formal title to their land, cannot use it as collateral against a credit transaction. As a result, formal credit institutions are inaccessible to most farmers, leaving them at the mercy of informal moneylenders.
The DILRMP is still heavily reliant on government functionaries to act as trusted third parties to process deeds and verify data. This leaves the system vulnerable to inefficiency and iniquity. Recent data from the DILRMP show that in most States, the digital land record database has not been integrated with the digitised land registration database. This deficiency of data hinders the seamless verification of documents submitted for registration. VAs and patwaris tend to display apathy towards cross-referencing and verifying data, resulting in innumerable delays. This system is also vulnerable to cyber-attacks. A cyber-attack on a digital land registry could result in the loss or theft of important data.
The idea of the ‘blockchain’ first came about in 2008, when a person or a group of people going by the pseudonymous ‘Satoshi Nakamoto’ published a paper detailing the workings of a peer-to-peer electronic cash system that dis-intermediated financial institutions: Bitcoin. The blockchain was the technology underpinning the Bitcoin electronic cash system. Bitcoin was the currency of choice for nefarious networks like the infamous ‘Silk Road’ – an online marketplace for people peddling contraband. Since then, however, blockchain has attained a new identity in enterprise. Many financial institutions and firms across industries are experimenting with this technology as a secure and transparent way to digitally track the ownership of assets and the verifiability of transactions. Some countries are now turning to blockchain technologies to address inefficiencies in current systems and increase the effectiveness of public service delivery.
A blockchain is a data structure that makes it possible to create a digital record of transactions and share it across a distributed network of computers. Through cryptography, each participant on the network may update the ledger securely without the need for a central authority.
Each block carries a unique hash: the transaction is distilled down to a code known as the hash value, which serves as the digital fingerprint of that particular transaction. Each computer in the blockchain’s network is called a node. Each node holds a copy of the entire ledger and works with other nodes to maintain the ledger’s consistency. This creates redundancy in the system: if any node disappears or goes down, all is not lost. A consensus mechanism is the set of rules the network uses to verify each transaction and agree on the current state of the blockchain.
Once a block of data is recorded on the blockchain ledger, it becomes extremely difficult to change or remove. If someone wants to add to the blockchain, participants in the network – all of whom have copies of the existing blockchain – run algorithms to evaluate and verify the proposed transaction. The transaction’s hash has to match the blockchain’s history. If the majority of the nodes reach a consensus as to the transaction’s validity, then it will be approved and added to the ledger. Blockchain is a transparent platform, the workings of which are open to examination and verification.
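The tamper-evidence described above can be made concrete with a toy hash-linked ledger. This is a minimal single-machine sketch, not any production blockchain: the field names and parcel descriptions are invented, and real systems add consensus and signatures on top of the same chaining idea.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 fingerprint of a block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transaction: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transaction": transaction})

def verify(chain: list) -> bool:
    """A chain is valid only if every block matches its successor's prev_hash."""
    return all(
        chain[i + 1]["prev_hash"] == block_hash(chain[i])
        for i in range(len(chain) - 1)
    )

ledger = []
add_block(ledger, "Parcel 42: A transfers title to B")
add_block(ledger, "Parcel 42: B mortgages title to Bank C")
assert verify(ledger)

ledger[0]["transaction"] = "Parcel 42: A transfers title to X"  # tamper
assert not verify(ledger)  # the hash link to the next block now breaks
```

Because each block commits to the hash of the one before it, rewriting any historical entry invalidates every subsequent link, which is why tampering is immediately detectable by any node holding a copy.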
Particular attention is being paid to how blockchains can be used for registries. A blockchain is an instrument that ensures veracity, making it the perfect recording system for anything worth tracking closely. At present, blockchain technology is being used to track diamond transactions, act as a notary, and store personal information in a way that does away with passwords. Though this mechanism still relies on an intermediary to store data accurately, there are ways to circumscribe its ability to influence the nature of the data such as through ‘smart contracts’.
Smart contracts are contracts that have been distilled into code. Smart contracts enable adding supplementary information to what is already stored in the blockchain to regulate data authorisation and storage. They self-verify their conditions using data and then execute themselves. They are tamper-resistant because they are run and stored on a network of computers that is beyond the influence of the contract’s participants. Since none of the contract’s participants can influence the smart contract beyond the actual performance of their obligations, all the participants can trust that this type of contract will be executed as it is written.
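As a sketch of that idea, the hypothetical transfer rule below executes only when its coded conditions hold. All names, parcels and amounts are invented, and a real smart contract would run replicated on-chain rather than as a local function; the point is only that no registrar’s discretion enters the transfer.

```python
def transfer_contract(registry: dict, parcel: str, seller: str,
                      buyer: str, price: int, balances: dict) -> bool:
    """Hypothetical smart contract: a title transfer that self-verifies
    its conditions and then executes, with no intermediary involved."""
    if registry.get(parcel) != seller:     # seller must actually hold title
        return False
    if balances.get(buyer, 0) < price:     # buyer must have the funds
        return False
    balances[buyer] -= price
    balances[seller] = balances.get(seller, 0) + price
    registry[parcel] = buyer               # title passes automatically
    return True

registry = {"survey-101": "Asha"}
balances = {"Ravi": 500_000}

assert transfer_contract(registry, "survey-101", "Asha", "Ravi", 400_000, balances)
assert registry["survey-101"] == "Ravi"

# A second attempt fails: Asha no longer holds the title.
assert not transfer_contract(registry, "survey-101", "Asha", "Ravi", 1, balances)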
At present, Estonia, Honduras, Georgia, Ghana and Sweden are looking into blockchain-based land registry systems. Sweden is leading this initiative with a working pilot in place that is looking at full-scale deployment soon. A study of existing systems could shed light on which system would be best for India to adopt.
Sweden’s current land registration system lacks transparency and efficiency. As a result, private stakeholders developed a system to ensure the sanctity of agreements amongst themselves. Understanding the need for change, the Lantmateriet (The Swedish Mapping, Cadastre and Land Registration Authority) collaborated with Kairos Future (a consultancy), The Telia Company (Sweden’s dominant tele-network operator), and ChromaWay (a blockchain solutions firm) to develop an innovative way to address the issues plaguing the current land registry framework. They devised a plan to create an application that would use blockchain technology to facilitate transactions. Communication between the various stakeholders (real estate agent, bank, buyer, seller, and the Lantmateriet) is conducted over the application. All information about the property (current owner, cadastral surveys, among others) is digitised and put into the blockchain. Smart contracts then ensure that this digitised space is regulated by certain rules (i.e., Sweden’s regulatory policies). The application is then used as an interface to facilitate all transactions concerning a particular property. The purchase agreement is distilled down to a unique hash code and put into the blockchain. Banks, real estate agents, buyers and the Lantmateriet can substantiate the veracity of this purchase agreement and other documents through their unique digital signature (hash on the blockchain). Banks can also ensure that the buyer has enough funds in their account to carry out the transaction. The Lantmateriet can then register and grant title to the buyer. This project is currently in the pilot phase.
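The ‘unique hash code’ step in the Swedish pilot can be illustrated in a few lines. SHA-256 stands in here for whatever digest the pilot actually uses, and the agreement text is, of course, invented.

```python
import hashlib

def fingerprint(document: bytes) -> str:
    """SHA-256 digest standing in for the 'unique hash code' of an agreement."""
    return hashlib.sha256(document).hexdigest()

agreement = b"Purchase agreement: seller S, buyer B, parcel P, price 1,000,000 SEK"
recorded = fingerprint(agreement)   # this digest is what goes into the blockchain

# Any party holding a copy can confirm it matches the recorded version:
assert fingerprint(agreement) == recorded

# A single altered character produces a completely different digest:
assert fingerprint(agreement.replace(b"1,000,000", b"1,000,001")) != recorded
```

This is why the bank, the agent and the Lantmateriet can each substantiate the same document independently: they never need to trust one another’s copy, only compare digests against the chain.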
The beauty of this program is that it allows operations to run in parallel. Sequential, step-by-step progressions are generally slow and tedious; here, many different procedures, activities and formalities can be carried out simultaneously through a single, open and transparent platform.
Before a system like this is introduced in India, certain prerequisites will have to be met. The following are the key enablers to drive implementation of blockchain technology in India:
These systems will provide the foundation for the successful implementation of a blockchain land registry in India.
The first step in establishing a robust PKI in India would be to consolidate the current institutional infrastructure governing land registration and titles. This entails the creation of a single department to manage registration, a record of rights and cadastral surveys. Consolidation of these three areas would facilitate greater coordination. It would also lead to an increase in accountability.
Cost-effectiveness: Although initial implementation costs would be high, the blockchain provides a way of combining many processes and systems. This would increase efficiency through distributed processing and reduce long-term costs, for example through lower manpower requirements in the department concerned.
Efficiency: The use of smartphones as a one-stop shop for all property-related transactions will significantly reduce the inefficiency of the current system. Most significantly, it will drastically cut down the number of intermediaries that crowd the current title regime. The tamper-resistant nature of the blockchain also helps curb corruption, as patwaris will not be able to go back and change land records in exchange for a bribe.
Transparency: Registration on the blockchain would mean that the information in the registry is completely available to the public. The CAG can be brought onto the platform as a stakeholder so that its office can view transactions and information uploads as they happen. If someone tries to tamper with the information, anyone can identify the tampering at any point in time. While blockchains are not entirely hack-proof, protocols can be put in place to counter such attacks on the system. It would also make fraudulent land transactions far more difficult.
Easing administrative burden: Land/property-related disputes currently make up 70 percent of the total case backlog in India. A robust land title system will lead to a decrease in the number of land-related disputes in the country and, in turn, lessen the backlog in the country’s courts.
Rajasthan’s legislative assembly recently passed the Rajasthan Urban Land (Certification of Titles) Act, 2016, making it the ideal location for a pilot blockchain project. The Act sets up the legal framework for granting legitimate rights to a property owner in the State. The Act provides that if a third person were to successfully challenge the title of a transaction between two parties, the government would have to ensure compensation for the buyer against the payment made. It also clearly defines the period within which people can dispute provisional certificates of title to property. The State also has a functional website which provides copies of the current Records of Rights. The tehsil with the highest per capita usage of mobile phones would be the ideal location to launch the pilot project.
Deployment should commence with re-surveying the land. At present, most cadastral maps in India date back to the British era. The Department of Land Resources is tapping agencies that provide Unmanned Aerial Vehicle (UAV) technology services to carry out the cadastral mapping process. Once concrete land parcels are established, all newly registered property (those with no formal titles or claims to them) can be registered directly on the blockchain. Title to other plots will have to be established through the issuance of a provisional certificate, and if the property is disputed, from the outcome of the dispute. To ensure quicker resolution of disputes, the Act can be amended to mandate that disputes under the Act be resolved within a year.
A flat rate transaction tax can be used to replace Stamp Duty on land transactions. This tax could be charged on the application itself – incorporated within the cost of the land parcel – and transferred directly to the concerned Government Department. While this does not do away with Stamp Duty altogether, it eliminates the process of paying Stamp Duty to a collector—and the uncertainty that often accompanies it. Due to the Stamp Duty Act’s ambiguous nature, there is often a disagreement between the Collector and the person trying to pay duty on the amount of duty owed. Collectors have also been known to use their position to harass the public. Thus, the payment of Stamp Duty in itself is a precipitator for a lot of litigation. Rates vary across States and the Stamp Duty Act itself is a non-comprehensive, tedious piece of legislation. Incorporating the tax directly within the transaction amount would also prevent tax evasion.
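Folding the levy into the on-app price is simple arithmetic. In the sketch below the 1 percent rate is purely hypothetical; the actual rate would be set by the State.

```python
FLAT_RATE = 0.01  # hypothetical 1% flat transaction tax, replacing Stamp Duty

def quoted_price(parcel_price: int) -> tuple[int, int]:
    """Return (total charged to the buyer, tax routed to the department)."""
    tax = round(parcel_price * FLAT_RATE)
    return parcel_price + tax, tax

total, tax = quoted_price(2_500_000)
print(total, tax)  # the buyer sees one all-in price; the tax transfers automatically
```

Because the amount is computed and remitted by the application itself, there is no Collector to disagree with and no separate payment step to litigate over.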
There are a number of impediments to implementing a blockchain land registry in India. For one, it requires extensive legislative and administrative overhaul, starting with an amendment of the Constitution and the introduction of a comprehensive Titling Act. It will also require a reconfiguration of the administrative regime that currently oversees land records. Overcoming these structural barriers will be arduous, to say the least. Yet the blockchain ledger would be toothless without legislative support: under the current legal framework, entries in the ledger would still provide only presumptive titles, so a conclusive land titling law must be passed to mitigate future disputes.
There are also a number of infrastructural deficits. At present, only 34.8 percent of the Indian population has access to the internet. This lack of access, compounded by poor literacy rates and abject poverty, is a major roadblock to expanding the reach of any digital application. And though mobile data plans are priced competitively, they are still too expensive for a majority of Indians. A positive development in this regard is the ambitious Pruthvi project: Saankhya Labs in Bengaluru has developed a chip that uses television White Space, or unused spectrum bandwidth, to supply internet access to scores of rural households. The project is currently in the field-trial stage.
The new property regime would considerably cut down the role of government intermediaries in property transactions. The company partnering with the government on the initiative would be responsible for uploading the initial data onto the blockchain. The blockchain can be programmed to establish the veracity of this data through smart contracts, which ensure that transactions and titles adhere to the policies and regulations put in place by the government. Smart contracts can also be programmed to consult court registries (where disputes are resolved and titles granted), cadastral surveys, and other public records to verify the accuracy of titles. Intermediaries would then only need to observe the network and raise the alarm at any evidence of tampering. Even if they graduate to entering the data themselves, they cannot cheat the system, as it will not accept a change that does not match the information in the public records.
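The validation rule described above can be sketched as follows. This is a plain-Python analogue of a smart-contract check, not real smart-contract code; the record sources and field names are hypothetical stand-ins for the court registries, cadastral surveys and title records a real deployment would consult.

```python
def validate_transfer(transfer, title_registry, court_registry, cadastral_survey):
    """Accept a transfer only if it matches every public record.

    Mirrors the idea that the network rejects any change that does not
    match the information in the public records. Returns (ok, reason).
    """
    parcel = transfer["parcel_id"]

    # 1. The seller must hold the currently registered title.
    if title_registry.get(parcel) != transfer["seller"]:
        return False, "seller does not hold the registered title"

    # 2. The parcel must not be under an unresolved court dispute.
    if court_registry.get(parcel, {}).get("status") == "pending":
        return False, "parcel is under a pending dispute"

    # 3. The parcel must exist in the cadastral survey.
    if parcel not in cadastral_survey:
        return False, "parcel not found in the cadastral survey"

    return True, "transfer accepted"
```

In this scheme an intermediary (or anyone else) entering a transfer that contradicts any of the three record sources is simply rejected, which is why observers only need to watch for tampering rather than vouch for each transaction.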
Despite the sizable barriers to implementation and adoption, the current government’s fervour for digitisation sustains an optimistic outlook for this technology. If the government can overcome the infrastructural, institutional and regulatory hurdles, a blockchain land registry, coupled with robust titling legislation, may yet prove the best way to deliver a secure, transparent and efficient land records system in India. It would remedy many of the deficiencies of the current system. As stated earlier, the technology offers an innovative solution to a variety of issues plaguing the country. And while it is by no means a panacea, its advantages far outweigh any negatives that may ensue from its implementation.
 Worstall, Tim. “India’s Economic Growth Up To 7.9% Of GDP For Quarter, 7.6% For The Year.” Forbes Magazine, May 31, 2016. http://www.forbes.com/sites/timworstall/2016/05/31/indias-economic-growth-up-to-7-9-of-gdp-for-quarter-7-6-for-the-year/#533f4cd54777.
 Mallick, Jayanta. “Study finds shift from agriculture in rural areas.” The Hindu Business Line, August 31, 2015. http://www.thehindubusinessline.com/economy/study-finds-shift-from-agriculture-in-rural-areas/article7600374.ece.
 Locke, Anne. Property rights and development briefing: Property rights and economic Growth. Issue brief. Overseas Development Institute. August 2013. https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/8513.pdf.
 For more information see Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006.
 Retrieved from Digital India Land Records Modernization Programme. http://nlrmp.nic.in/.
 Behera, Hari Charan. “Constraints in Land Record Computerisation.” Economic and Political Weekly 44, no. 25 (June 20, 2009): 21-24. http://www.jstor.org/stable/40279233.
 Padmanabhan, Ananth. “Data route to transparency.” The Indian Express, August 12, 2016. http://indianexpress.com/article/opinion/columns/data-route-to-transparency-blockchain-technology-2969093/.
 Akhaury, Vanita. “Digitisation of land records: How the relaunched programme will help overcome property crimes and frauds.” Firstpost, April 1, 2016. http://www.firstpost.com/business/digitisation-of-land-records-how-the-relaunched-programme-will-help-overcome-property-crimes-and-frauds-2708226.html.
 Deininger, Klaus, and Gershon Feder. “Land registration, governance, and development: evidence and implications for policy.” World Bank Research Observer 24, no. 2 (2009): 233-66. https://openknowledge.worldbank.org/bitstream/handle/10986/4430/wbro_24_2_233.pdf?sequence=1&isAllowed=y.
 Lemel, Harold. “Land titling in Costa Rica: a legal and economic survey.” Land Use Policy 5, no. 3 (July 1988): 273-90. http://www.sciencedirect.com/science/article/pii/026483778890035X.
 Feder, Gershon. “Land registration and titling from an economist’s perspective: a case study in rural Thailand.” Survey Review 29 (1987): 163-74.
 Dowall, D. E., and M. Leaf. “The price of land for housing in Jakarta: an analysis of the effects of location, urban infrastructure and tenure on residential plot prices.” Prepared for the Regional Housing and Urban Development Office, USAID. Berkeley, CA: Department of City and Regional Planning, University of California, 1990.
 Soto, Hernando De. The mystery of capital: why capitalism triumphs in the West and fails everywhere else. New York: Basic Books, 2000.
 Shin, Laura. “Republic Of Georgia To Pilot Land Titling On Blockchain With Economist Hernando De Soto.” Forbes Magazine, April 21, 2016. http://www.forbes.com/sites/laurashin/2016/04/21/republic-of-georgia-to-pilot-land-titling-on-blockchain-with-economist-hernando-de-soto-bitfury/#883989d65500.
 Banerjee, Abhijit, and Lakshmi Iyer. “History, institutions, and economic performance: the legacy of colonial land tenure systems in India.” The American Economic Review 95, no. 4 (September 2005): 1190-213. http://links.jstor.org/sici?sici=0002-8282%28200509%2995%3A4%3C1190%3AHIAEPT%3E2.0.CO%3B2-8.
 Setalvad, M. C. Sixth Report on the Registration Act, 1908. Report no. Six. Law Commission of India. http://lawcommissionofindia.nic.in/1-50/report6.pdf.
 Ibid., 3.
 Deininger, Klaus, and Aparajita Goyal. “Going digital: Credit effects of land registry computerization in India.” Journal of Development Economics 99 (March 3, 2012): 236-43. http://www.sciencedirect.com/science/article/pii/S0304387812000181.
 Sarangi, U. C. Report of The Task Force on Credit Related Issues of Farmers. Report. Ministry of Agriculture, National Bank for Agriculture and Rural Development. http://econpapers.repec.org/paper/esswpaper/id_3a3951.htm.
 Deshpande, R. S. Emerging Issues in Land Policy. Issue brief no. 16. Asian Development Bank, India Resident Mission. New Delhi: Asian Development Bank, 2007. 1-15.
 Daksh. “What kinds of cases are litigants filing?” Map. Daksh India. http://dakshindia.org/access-to-justice-survey-results/index.html.
 The Registration Act, 1908, § 17 (1908).
 The Land Registry in the blockchain. Report. ChromaWay, 2016.
 Offline and falling behind: Barriers to Internet adoption. Report. Technology, Media and Telecom, McKinsey and Company. October 2014. http://www.mckinsey.com/industries/high-tech/our-insights/offline-and-falling-behind-barriers-to-internet-adoption.
 Murali, Anand. “Pruthvi, a chip, can connect India’s rural population to the internet.” The Economic Times, October 19, 2015. http://economictimes.indiatimes.com/tech/internet/pruthvi-a-chip-can-connect-indias-rural-population-to-the-internet/articleshow/49445899.cms.
Distributed Ledger Technology: beyond block chain. Report. UK Government Chief Scientific Adviser, Government Office for Science. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/492972/gs-16-1-distributed-ledger-technology.pdf.
Rajasthan Urban Land (Certification of Titles) Act, 2016 (2016).
The Constitution of India.
SmartContract + Factom Announce Collaboration. (n.d.). Retrieved from https://www.factom.com/blog/smartcontract-factom-announce-collaboration
Swan, Melanie. Blockchain: blueprint for a new economy. O’Reilly Media, 2015.
Wadhwa, D. C. “Guaranteeing title to land the only sensible solution.” Economic and Political Weekly 37, no. 47 (November 29, 2002): 4699-722. https://www.jstor.org/stable/4412872?seq=1#page_scan_tab_contents.
This article originally appeared in ORF Occasional Paper 105.
Published on 15 February 2017.
Individuals, civil society organisations, and governments are investing tremendous energy and money in cyberspace, transforming innumerable aspects of peoples’ daily lives. Cyberspace has also had a transformative impact on the evolution of all sorts of conflicts. Just as the French Revolution (1789-1799) saw a “democratization of communications, an increase in public access, a sharp reduction in cost, a growth in frequency, and an exploitation of images to construct a mobilizing narrative”, today’s internet-driven technological revolution has led to a phenomenal growth in connectivity while giving individuals and small groups disproportionate power. Audrey Kurth Cronin argues that “blogs are today’s revolutionary pamphlets, websites are the new dailies, and list serves are today’s broadsides”.
Describing the characteristics of a fast evolving ‘network society’, Manuel Castells, a renowned thinker on communication and information society, argues that “conflicts of our time are fought by networked social actors aiming to reach their constituencies and target audiences through the decisive switch to multimedia communication networks”. John Mackinlay has contended that changes in mass communications have deeply affected the nature of insurgencies in which physical space has been rendered less important. With the rise of a “deterritorialised” state, insurgents are capable of using propaganda crafted and disseminated from distant locations. Mackinlay writes that “the techniques of an insurgency evolve with the societies from which it arises…if the communications revolution has given birth to global communities and global movements, so too can it herald a form of insurgent energy that is de-territorialised and globally connected.” It is clear that insurgencies are being shaped by cyberspace, shifting the centre of gravity from the physical world to the ‘virtual’ domain or cyberspace. Noted security expert, Bill Gertz, has similarly argued in his latest book, iWar: War and Peace in the Information Age, that “warfare in the twenty-first century will be dominated by information operations: nonkinetic conflict waged in the digital realm”.
Concern over ‘communications strategies’, ‘network society’, ‘information operations’ and other variations on propaganda reflects Castells’ point. According to political scientist Joseph Nye, “In an information age, communications strategies become more important, and outcomes are shaped not merely by whose army wins but also by whose story wins. In the fight against terrorism, for example, it is essential to have a narrative that appeals to the mainstream and prevents its recruitment by radicals. In the battle against insurgencies, kinetic military force must be accompanied by soft power instruments that help to win over the hearts and minds (shape the preferences) of the majority of the population”. Echoing Nye’s words, Britain’s former Chief of Defence Staff, General David Richards, contended that “Conflict today, especially because so much of it is effectively fought through the medium of the communications revolution, is principally about and for people – hearts and minds on a mass scale.” Because triggering a conflict through cyberspace can be low-cost and potentially devastating in impact, insurgents and terrorists throughout the world have come to rely heavily on cyber mobilisation: to conduct psychological warfare, to propagandise the successes of insurgents and counter the claims of government agencies, and to recruit, finance, and train more fighters for the cause. These factors are compelling counterinsurgents to turn their attention to the cyber environment. There is still, however, a great deal of debate about how insurgency can be waged in cyberspace. Counterinsurgency experts ask whether it is simply old wine in a new bottle, or a completely new dimension of conflict that did not exist before.
To answer this question, it is important to understand terms such as the ‘virtual ummah’, ‘digital umma’, and ‘Dar al-Cyber Islam’, which highlight how Muslims have created transnational Internet communities in cyberspace beyond geographically limited institutions. Even though personal blogs and other social-media tools allow Muslims to represent themselves anew, they also help terror groups such as al-Qaeda and the Islamic State (IS) subvert traditional religious authorities by propagating interpretations of jihad that disseminate radical worldviews. Al-Qaeda has been heralded as the first terrorist organisation to revolutionise its operations by exploiting the Internet. The high level of sophistication achieved by cyber jihadists was illustrated in a manual issued by an al-Qaeda mouthpiece, the Global Islamic Media Front, which provided a detailed guide to creating Internet proxies in order to ensure anonymity. It has also been speculated that ‘digihad’ has “made it easier for potential Jihadis to reinforce their worldview without ever leaving home” and to get “eventually introduced to extremist ideology”.
The outgoing US President Barack Obama once observed: “Social media and the Internet is the primary way in which these terrorist organizations are communicating. Now, that’s no different than anybody else, but they’re good at it.” Obama was simply repeating what former US Secretary of Defense Robert Gates had said as early as 2007: “It is just plain embarrassing that al-Qaeda is better at communicating its message on the internet than America.” With the emergence of IS, not only has the world of jihadism been transformed, but jihadists have also begun to use the Internet innovatively to galvanise support for their cause.
IS has become a master of social media communication, from content creation to dissemination. It regularly uploads videos with content that is easy to understand and practical, increasing their potential to go viral. While al-Qaeda was more dependent on the Internet, IS has shifted to social media, prompting outgoing US Secretary of Defense Ashton Carter to term IS “the first social media–fueled terrorist group”. According to Gulshan Rai, India’s National Cyber Security Coordinator in the Prime Minister’s Office, more than 70 percent of terrorists and terror groups across the world use cyber tools such as Voice over Internet Telecom, social media and encryption to spread their vision and objectives.
The Kashmir conflict has evolved dramatically and traumatically since the tragic partition of the Indian subcontinent in 1947. Three factors are primarily responsible for the enduring character of the conflict: Pakistan’s obsession with Kashmir, India’s colossal political mismanagement of Jammu and Kashmir, and the emergence of a radical Islamist ideology in the state.
Pakistan has constantly challenged India’s territorial rights over Kashmir. Its strategic objective of dividing India and controlling Kashmir has shaped its policy of supporting terrorism against India. Having failed to defeat India through conventional military means, Pakistan’s security establishment, since the late 1980s, has been supporting and financing the insurgency in Kashmir. Cross-border infiltration from Pakistan has complemented the insurgency being waged by local actors. It has also sought to exploit the Kashmiri people’s growing dissatisfaction with the Indian state.
India’s own inept handling of the insurgency has further worsened the situation in Kashmir. A hollow rhetoric can never be a substitute for a successful Kashmir policy. The former chief of India’s Research and Analysis Wing (RAW), A S Dulat, has candidly written in his memoirs that “we in India wasted so many years in containing the Kashmir militancy. And once it was contained, we sat back and were happy with the status quo, instead of taking advantage of the situation to forge a political solution.”
The Kashmir issue also has its roots in radical Islam. The ideological aspect of the Kashmir insurgency should not be underestimated, nor should it be reduced entirely to an ‘accidental guerrilla’ syndrome in which only local grievances need to be addressed. The newly resurgent Kashmiri insurgency has local roots, but there are concerns that it could become internationalised through links with transnational jihadist groups, from which the Kashmiri youth joining the militancy draw inspiration.
The Indian Army’s 2006 doctrine defines insurgency as “an organised armed struggle by a section of the population against the state, usually with foreign support. Possible causes of an insurgency include ideological, ethnic or linguistic differences; or politico-socio-economic reasons and/or fundamentalism and extremism. Interference by external forces may act as a catalyst to provide impetus to the movement.” By contrast, a terrorist act is defined by Indian law as an activity carried out “with intent to threaten or likely to threaten the unity, integrity, security, economic security or sovereignty of India or with intent to strike terror or likely to strike terror in the people or any section of the people in India or in any foreign country”. Despite the difference between the two, terrorism is often used as an instrument by an insurgency, as has been the case in Kashmir.
A lasting solution to the Kashmir problem in today’s polarised political atmosphere seems remote, but further repression of an already angry and alienated people will only create more problems. After years of enduring violence, mutual distrust, communal polarisation and Pakistani interference, secessionist sentiments are now firmly entrenched in Kashmiri society. The misuse of social media by terrorists and insurgents has led to further radicalisation of the people, posing greater challenges for the Indian State.
The Kashmiri youth have frequently opposed the presence of the Indian State during times of social unrest. The virulent protests of 2010 and the subsequent State crackdown caused much anger, resentment and widespread anti-India feelings in the Kashmir Valley. In the recent years, there has been a new surge in the incidence of Kashmiri youth taking up arms against the Indian establishment. The new phase of unrest that unfolded in 2016 is qualitatively different from previous scenarios in terms of intensity, scale, and the nature of mobilisation. Violent incidents and fatalities in the Kashmir Valley have shown a substantial increase in this new phase.
The ‘Azadi’ slogan is gaining popularity among the Kashmiri youth, mobilising them in large numbers. This phenomenon cannot be attributed to any single factor. Like all Indians, Kashmiris have the right to protest in a peaceful and non-violent manner. In a democratic society like India, Kashmiris cannot be expected to live in constant fear of laws like the Armed Forces Special Powers Act (AFSPA) and the Public Safety Act (PSA), which grant sweeping powers to security forces and have led to several cases of human rights violations in the state. When constitutional rights are undermined and all forms of dissent are outlawed, the line between peaceful protest and armed resistance often blurs, radicalising even moderate voices. The PSA continues to be used repeatedly to detain Kashmiri separatist leaders. In such an atmosphere, ‘popular’ discontent and local ‘uprisings’, fuelled by the separatist forces, are being exploited by state and non-state actors in Pakistan.
The two most alarming aspects of the current phase of Kashmir’s terrorism-driven insurgency are the rise in the number of homegrown militants, and the growing legitimacy of the militancy among the civil society and the educated classes of the Valley. It would be self-defeating to put the entire blame for Kashmir’s present predicament on Pakistan.
The use of modern information technology by the militants today to further their strategic objectives reflects another important departure from the past. The suppression of their rights to freedom of expression in the physical space has pushed the Kashmiri youth towards the ‘virtual’ space to vent their resentment and feeling of alienation. The battlefield is now a multidimensional one, encompassing both physical territory and cyberspace. The insurgents are exploiting the internet for recruitment, propaganda and support, as well as for strengthening their cross-border linkages.
Indications of a shift from the physical to the virtual domain were already visible during the 2010 protests. At the time, misguided and angry youth in Kashmir displayed their resentment through heavy stone-pelting and massive protests organised with social media tools, particularly text messaging via mobile phones. Social media has shifted the paradigm in terms of the tools available to protesters in Kashmir. They no longer need to resort to illegal measures to protest; social media has given them the space to raise awareness, spread information and plan protest rallies through completely legitimate means. The proportion of people in Kashmir with access to social media increased significantly, from 25 percent in 2010 to about 70 percent by the end of 2015.
The violent protests sparked off in July 2016 by the killing of a Hizb-ul Mujahideen (HuM) commander, Burhan Wani, stunned the Indian security agencies, which were left paralysed and unable to respond appropriately. In the five years before his death, Wani had emerged as the face of HuM. He threatened security forces and attracted more youth into the insurgency. Known as a top jihadist recruiter in the Kashmir Valley, Wani belonged to the breed of educated youth who joined the insurgency after the 2010 street protests. Before Wani, most Kashmiri insurgents operated clandestinely, even covering their faces in public spaces. They were not commonly known among the people, except when their names were released by security forces after they were killed. Wani, however, whose most enduring image is perhaps the one photographed against a picturesque Kashmir hillside, dressed in combat uniform and bearing a rifle, managed to make the insurgency appear ‘glamorous’. He posted photographs and videos of himself on social media that were widely circulated in Kashmir, especially among the youth, soon making him a household name.
In August 2015, Wani uploaded a video on Facebook, carrying messages released by other jihadist leaders, that called for the setting up of a Khilafat in the region. He urged the youth to join him and asked the police to eschew their fight against the terrorists. The video, which showed Wani and two others in combat fatigues, a Kalashnikov and a copy of the Quran prominently at his side, was widely circulated on WhatsApp in Kashmir. In another video in early June 2016, this time uploaded on both YouTube and Facebook, Wani threatened to launch attacks against the proposals to set up separate Sainik colonies and townships for Kashmiri Pandits. He warned that HuM “will act against every man in uniform who stands for the Indian Constitution”, while asking the youth to keep a record of the police personnel of their locality and provide information about their activities.
Zakir Rashid Bhat, HuM’s new commander and successor to Burhan Wani, released a video message in August 2016 that was widely circulated through WhatsApp. Calling upon the people to support the protests, Rashid asked the Kashmiri youth to boycott the recruitment drive for special police officers, warning them that they would be used to “create another Ikhwan”, a pro-government counter-insurgency group created in the 1990s comprising reformed Kashmiri insurgents. In October 2016, he released another video claiming that Sikh militants had also requested to join HuM, and asking the Kashmiri youth to snatch weapons from the security forces. His videos and messages, widely circulated among the Kashmiri youth on various social networking sites, provoked several gun-snatching incidents in South Kashmir, which unnerved the security agencies. From July till mid-October, more than five dozen weapons, including AK-47, INSAS, carbine, SLR and .303 rifles, were snatched from police personnel. These were described as a “cause of concern” by Kashmir’s Army Commander, while Kashmir’s Inspector General of Police, Syed Javed Mujtaba Gilani, claimed that the snatched weapons were likely to fall into the hands of people who “may use them for militancy”.
Through the use of relatively cheap and accessible media, HuM has been trying to recruit the Kashmiri youth to its cause. In fact, Wani’s expanding base of supporters translated into a sudden upsurge in the number of local militants. In 2015, the Kashmir police examined the cases of 111 youngsters who had joined the militancy; 58 of them eventually returned home. At least 88 of them were between the ages of 15 and 30, and more than half had been radicalised through the internet. As more youth joined militant ranks, the number of locals operating in the region outnumbered foreign terrorists for the first time in a decade. According to Intelligence Bureau sources, more than 100 Kashmiri youth joined the insurgency after Wani’s death. While the numbers may be contested, the fact remains that today’s ‘homegrown’ Kashmiri insurgents represent a new phenomenon, entirely different from what Kashmir had become accustomed to during the last two decades. India’s former National Security Adviser, M K Narayanan, admits that “in marked contrast to earlier phases of trouble in Kashmir, the present movement is almost entirely home grown”.
South Kashmir’s four districts – Pulwama, Anantnag, Shopian, Kulgam – have increasingly become epicentres of the insurgency. They had been relatively calm during the height of insurgency in the 1990s, but the situation has changed during the last few years. The Islamic proselytising sect Tablighi Jamaat has mushroomed in South Kashmir and has financed training camps in all the four districts, with its headquarters in Tral. Since 2014, there has been a rise in the number of local youth taking up arms. It is no longer required to send Kashmiri youth to Pakistan for training or even for indoctrination as computers and mobiles phones have made the terrorists’ task easier.
Wani’s tech-savvy tactics turned South Kashmir into a hotbed of insurgency. During the first half of 2015, two dozen fresh inductions into the militancy were reported from South Kashmir. Wani’s recruitment strategy factored in the changing psyche of South Kashmir’s youth, who have become increasingly vulnerable to jihadist recruiters. In August 2015, an Urdu journalist was quoted as saying that the terror groups “now don’t waste time on sending youngsters to Pakistan. They first ask them to get a weapon. Then they assign them a target. Those who clear the first two stages are recruited. This serves an ulterior purpose: once a youngster carries out a strike, he can’t go back”. A police officer posted in South Kashmir claimed that Burhan’s popularity caused an upsurge in the recruitment and training of local youth in Kashmir; more than 60 local militants, trained in local camps, were active in South Kashmir in July 2016. According to a media report in mid-August, all four districts of South Kashmir descended into anarchy, with no hint of an administrative apparatus at work. As thousands held ‘Azadi’ rallies, only three of the 36 police stations in South Kashmir were functioning.
An environment with pre-existing violence or political tension is an enabling condition of jihadism. As a recent study highlights, there is a “synergy between jihadism and violence, whether perpetrated by repressive regimes, militia rivalries, terrorist groups, sectarian differences, tribal tensions, criminal organizations, or foreign intervention. Jihadism exploits local tensions; it fuels and is in turn fuelled by these tensions.” A conflict zone provides jihadist groups with a permissive environment in which to proselytise and recruit. Unfortunately, Kashmir is on the verge of becoming such a conflict zone, where jihadist movements are likely to grow as local groups adapt them to fit their needs.
Realising the crucial role of cyberspace in stimulating a global Islamist identity among Kashmiri Muslims, radical and jihadist organisations have been trying to create a cyber Islamic environment in Kashmir, which would provide them with a psychological platform to transmit their message for propaganda, indoctrination and recruitment to ever-expanding audiences. One of the most relevant aspects of the ‘virtual dimension’ of the Kashmir insurgency has been its gradual tilt towards global Islamist extremism. A Kashmiri police officer compared the Internet with “a tap running 24×7, gushing out Islamist propaganda” over which the police have no control.
For many years, Syed Ali Shah Geelani and many of his hardcore followers have sought to frame their struggle for ‘Azadi’ entirely in Islamic terms with very little success. However, it is only recently that there has been a sudden surge in Islamic terminology and symbolism in Kashmir’s socio-political landscape that is gradually delegitimising the previously dominant ethno-nationalist agenda of the insurgency. The People’s Democratic Party politician and former Deputy Chief Minister of the state, Muzaffar Hussain Baig, has rightly warned that Kashmir’s conflict “is on the verge of becoming religious extremism, which is not a political goal but a religious vision…infecting the hearts and minds of youths”. As explained earlier, HuM’s Kashmiri cadre have begun to employ propaganda tools and imagery used by the IS, including the rhetoric of a worldwide caliphate. During the initial protests in Kashmir after Wani’s death, IS flags were seen along with those of Pakistan.
In January 2016, the United Arab Emirates (UAE) deported Sheikh Azhar-ul-Islam, who hailed from Kangan tehsil in Ganderbal district. He became the first Kashmiri youth to be arrested by the National Investigation Agency for alleged links with IS. It is suspected that he became attracted to IS’ ideology after hearing sermons in the local mosque, and later online contacts with like-minded people facilitated his trip to UAE. The GOC of the Indian Army in Kashmir has already termed IS “a live threat that cannot be ignored”.
As part of a survey conducted by the state police on youth radicalisation in Kashmir, messages, posts and conversations on popular online platforms were intercepted based on certain keywords. Out of 500,000 conversations related to those keywords, around 100,000 were identified as “a matter of concern” by the Additional Director General of Police, CID, S M Sahai. Besides the easy availability of the Internet, another key factor in the growing Islamic radicalisation has been the decline of Sufi Islam, the traditional form of religious practice in the region, and the growing congregations of Wahhabi ideology through various Ahl-e-Hadith factions.
This growing radicalisation trend has replaced whatever little was left of the syncretic practices of Islam in Kashmir. The language of the recent mass protests has also been more religious than political. An organisation called Ittehad-e-Millat has come into being, consisting of elements from religious organisations such as the Jamaat-e-Islami and the Ahl-e-Hadith. Its leaders have been reportedly asking people, particularly in South Kashmir, to take an oath of turning away from mainstream political parties. All this indicates a larger political shift in the Valley and the Indian government is yet to fully grasp the dangerous potential of this change.
Taking a cue from the IS, Pakistan-based terrorist groups such as Lashkar-e-Tayyeba (LeT) and Jaish-e-Mohammad (JeM), along with HuM, have entered cyberspace. It was reported that the Jamaat-ud-Dawah (JuD), LeT’s charity arm, launched its cyber initiative during a conference on social media that it organised in Lahore in December 2015. Emphasising the growing importance of social media in disseminating propaganda, the JuD chief Hafiz Saeed called upon his followers to use this medium to strengthen the Kashmiri separatist movement. The LeT has been using cyberspace to fuel the Kashmiri insurgency. A few years ago, it used Voice over Internet Protocol (VoIP) – a technology that allows data encryption, making it difficult to decode messages – for communication purposes. The LeT is said to have its own private VoIP service, Ibotel, to communicate with its cadres in both Pakistan and Kashmir.
If cyber-insurgents could be readily identified, it would be easy to track and monitor their activities. But cyberspace provides anonymity to the insurgents, allowing them to cloak their identity and activities. Cyberspace “is an unregulated environment in which anonymity provides more opportunities than ever to disseminate extreme views, deliberate misinformation, and create hoaxes without revealing the person or organisation behind the creation of the content”. According to an internal communication of the Jammu and Kashmir Police in 2014, despite the sustained efforts of security agencies, terrorists and separatists were able to communicate uninterrupted through VoIP and other social media platforms such as Skype and WhatsApp. Moreover, there was concern that most of the offensive and subversive Facebook pages that the police had got blocked in early 2014 were reactivated. For instance, a Pulwama-based group restored a page blocked twice by the police.
Pakistan’s state and non-state actors are conducting aggressive intelligence operations against Indian security personnel in Kashmir to access strategic information. It was revealed in March 2016 that Pakistan’s Inter-Services Intelligence (ISI) was using a software application, ‘SmeshApp’, easily available on the Google Play store, to infect the smartphones and personal computers of Indian security personnel, particularly those deployed along the India-Pakistan border. Using this spyware, Pakistani handlers were luring Indian Army, Border Security Force and Central Reserve Police Force personnel through Facebook accounts. Once installed, the app could track all movements, phone calls, messages and photographs, with the mobile phone and the Facebook account acting as a database. The pressing need to prevent vital information from being compromised and released to the mainstream media forced the Union Ministry of Home Affairs to issue fresh guidelines for all the Central Armed Police Forces (CAPFs). These sought to regulate the storing and sharing of secret operational and service data on Internet-based social media platforms such as Twitter, Facebook, WhatsApp, YouTube, LinkedIn, Instagram, and others.
An analysis of reactions on social media platforms during July 8-14, 2016, after the killing of Burhan Wani, found that out of a sample of 126,000 respondents, 45 percent were from unknown geographical locations, 40 percent from Indian locations and about eight percent from Pakistan. None of this should come entirely as a surprise. As there is no robust mechanism for cyberspace surveillance in Kashmir, insurgents and separatists tweet or comment unchecked on social media in ways that can spark off trouble. Minister of State for Home, Hansraj Ahir, told the Rajya Sabha in 2016 that the “Pakistan strategy has been to try and promote radicalisation through vested groups and social media so that this can be given the shape of a civil resistance”.
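The cited shares can be converted into rough absolute counts with simple arithmetic. This is an illustrative back-of-the-envelope calculation only; the remaining share is presumably made up of respondents from other identified locations, which the source does not break down.

```python
# Back-of-the-envelope breakdown of the cited social media sample.
sample_size = 126_000
shares = {"unknown": 0.45, "india": 0.40, "pakistan": 0.08}

counts = {origin: round(sample_size * share) for origin, share in shares.items()}
remainder = sample_size - sum(counts.values())

print(counts)     # {'unknown': 56700, 'india': 50400, 'pakistan': 10080}
print(remainder)  # 8820 respondents from other, unreported locations
```

In other words, well over 56,000 reactions in a single week could not be geographically attributed at all, which underlines the scale of the attribution problem the text describes.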
It can be argued that the cyber dimension has become prominent in escalating protests and political violence in Kashmir in recent years. The most extreme and catastrophic expression of this trend was seen following Wani’s death. Government buildings were targeted as usual, but the violence this time was not confined to symbols of the government alone; families of security personnel, and civilians seen as ‘collaborators’, also increasingly came under attack. The large-scale protests benefited to a great degree from the newfound ability of protesters and insurgents to send and receive information on platforms not controlled by the establishment they were up against. Clearly, social media has offered insurgents a unique platform for preparation as well as after-action deliberation. The unconstrained circulation of videos and pictures allows the perpetuation of a certain narrative that, in turn, fuels further unrest.
The emergence of electronic jihad and cyber insurgency as modern manifestations of terror has made winning the hearts and minds of local people and, simultaneously, discrediting terrorist propaganda the most challenging task for any counter-insurgency strategy. Censorship, removal of content and government-sponsored counter-narrative efforts are mostly inadequate for preventing extremist ideology from spreading online. If not handled with greater energy, planning and resources, terrorist and insurgent propaganda could emerge as the dominant narrative.
If Kashmiri youth are to be dissuaded from joining the e-jihad, it is important to find more effective strategies to discredit Islamist radicals both on the battlefield and in cyberspace. The Indian government needs to better understand the full scope of ‘information engagement’ in its counterinsurgency policy. The police, the paramilitary forces and the Army in Kashmir must assertively employ all available social media platforms as well as all traditional propaganda tools to present an accessible, helpful, professional, efficient and accountable image of themselves. In Kashmir, this kind of perception management should be central to the tough task undertaken by security forces.
Jihadist recruiters prey on underlying grievances to recruit volunteers, both transnational and national. India’s security establishment would do well to put less emphasis on a heavy-handed, knee-jerk, and merely tactical response to the terrorism-driven insurgency in Kashmir. Given the growing importance of the insurgency’s virtual dimension, it is vital to frame a ‘strategic narrative’: a compelling storyline that can explain the government’s side convincingly. Its absence means that the counter-insurgent cedes the crucial virtual dimension to the insurgents, allowing them to shape the narrative to suit their purpose.
The Indian government has come under criticism for shutting down all Internet and mobile communication during the 2016 protests, instead of effectively countering the cyber insurgency waged by local insurgents and Pakistan-based jihadi cyber networks. The government’s move deprived its security agencies of vital clues, trends and information in cyberspace. Cyberspace can be useful for counter-insurgency as well. Although the networking effects of cyberspace allow insurgents to link up as never before, they also allow security agencies to map social media networks, providing vital clues about the leadership, structure and whereabouts of insurgents that would otherwise be impossible to gain.
Cyber-intelligence is unlikely to follow the classic intelligence cycle. Hence, the intelligence structures at all levels in Kashmir should be connected by a robust and secure communications architecture. Given the rapidly emerging threats from cyberspace, the Indian government should form a dedicated cyber warfare/cyber security team in Kashmir, employing cutting-edge technology. Intelligence agencies should remain vigilant for indications of emerging threats – threats that are amorphous and poorly understood, but that could materialise rapidly and unexpectedly.
Post-demonetisation, the declared objective of the central government to promote cashless economic transactions cannot be achieved without ensuring uninterrupted Internet connectivity. The existing counter-insurgency practice of shutting down the Internet in Kashmir after large-scale violence and unrest must take this aspect into account. The Indian government cannot expect to win back the goodwill of the Kashmiri population if the Internet is shut down for indefinite periods, paralysing their economic activities.
There is a lack of a specialist culture in India’s security agencies and armed forces. There are no cyber specialists or information warfare specialists who continue to work in their area of specialisation after their limited tenures. The paramilitary and the Army continue to be led by what are often referred to as generalist officers. Even when these officers develop a degree of specialisation in the cyber domain, their next appointment often takes precedence over retaining domain expertise. India’s cyber capabilities lag significantly behind those of global players, and due to “little control over the hardware used by Indian Internet users as well as the information that is carried through them, India’s national security architecture faces a difficult task in cyberspace”. Despite having a National Cyber Security Policy (2013) and a National Cyber Security Coordinator, the overall ecosystem of cyber security in India has not improved much. For a counterinsurgency strategy in cyberspace to be effective, India must develop mechanisms for ensuring that global best practices are translated into practice.
The homegrown insurgents that the government forces have been fighting in Kashmir are mostly small and loosely linked. Notwithstanding differences over tactics or goals, they have an alliance of convenience with Pakistan-based extremist and terror groups. Capturing territory or overthrowing the government may be the long-term goal of insurgents, but their immediate objective is to tie down the government and provoke it to take disproportionate measures that could further alienate the local population.
In today’s fast-changing socio-political scenario in Kashmir, it is not sufficient to focus on organisational and doctrinal changes in the military domain alone. In the long run, what really counts is the large-scale mobilisation of people. Cyber-mobilisation has emerged as a crucial element, not just in generating numbers of fighters but, more importantly, in inspiring violence and struggle. Thus, the government needs an effective cyber counter-insurgency strategy in Kashmir, turning more attention to analysing, influencing and countering the mobilisational tactics of the insurgents.
The Internet revolution in the Kashmir Valley is not just altering the way local people think but also changing the way they exhibit resistance to the establishment. Although jihadist guerrillas are not going to be replaced by cyber-insurgents operating in a virtual battlefield, cyber dominance in Kashmir may well make the difference between success and failure. India has to contend with a restless Kashmir, which has been exposed to incessant online provocations. New Delhi cannot afford to lose control of the narrative. Contesting the war of narratives is as vital as restoring normalcy.
Audrey Kurth Cronin, “Cyber-Mobilization: The New Levée en Masse,” Parameters (2006): 77-87.
 Manuel Castells, “A Sociology of Power: My Intellectual Journey”, The Annual Review of Sociology 42 (2016): 1-19, http://www.annualreviews.org/doi/pdf/10.1146/annurev-soc-081715-074158
 John Mackinlay, The Insurgent Archipelago (London: C Hurst, 2009).
 Bill Gertz, iWar: War and Peace in the Information Age (New Delhi: Threshold Editions 2017).
 Joseph S. Nye Jr., “Hard, Soft, and Smart Power,” in The Oxford Handbook of Modern Diplomacy, eds. Andrew F. Cooper, Jorge Heine, and Ramesh Thakur (Oxford: Oxford University Press, 2015).
 Richard Norton-Taylor, “UK military chiefs clash over future defence strategy,” The Guardian, January 18, 2010, https://www.theguardian.com/uk/2010/jan/18/military-defence-spending-army-navy.
For more on cyber mobilisation, see Timothy L. Thomas, “Cyber Mobilization: The Neglected Aspect of Information Operations and Counterinsurgency Doctrine,” in Countering Terrorism and Insurgency in the 21st Century: International Perspectives, ed. James J. F. Forest (Westport, Connecticut: Praeger Security International, 2007).
Angel Rabasa et al., Beyond al-Qaeda: Part 1, The Global Jihadist Movement (Washington, DC: RAND Corporation, 2006), 17.
 “Remarks by President Obama and Prime Minister Cameron of the United Kingdom in Joint Press Conference”, The White House, January 16, 2015, https://www.whitehouse.gov/the-press-office/2015/01/16/remarks-president-obama-and-prime-minister-cameron-united-kingdom-joint-.
 Cited in Thomas Rid and Marc Hecker, War 2.0: Irregular Warfare in the Information Age (London: Praeger Security International, 2009), 212.
 US Department of State, News Transcript, “Stennis Troop Talk”, April 15, 2016, https://www.defense.gov/News/Transcripts/Transcript-View/Article/722859/stennis-troop-talk
 IANS, “Over 70 per cent terrorists using cyber space: PMO cyber coordinator,” The Times of India, September 30, 2016, http://timesofindia.indiatimes.com/city/delhi/Over-70-per-cent-terrorists-using-cyber-space-PMO-cyber-coordinator/articleshow/54604754.cms.
 Sushil Aaron, “The Kashmir manifesto: Delhi’s policy playbook in the Valley,” The Hindustan Times, July 11, 2016.
 A S Dulat, Kashmir: The Vajpayee Years (New Delhi: HarperCollins, 2015).
 David Kilcullen, The Accidental Guerrilla: Fighting Small Wars in the Midst of a Big One (Oxford: Oxford University Press, 2009).
 Indian Army, Doctrine on Sub-Conventional Operations, (Simla: Headquarters Army Training Command, 2006).
The full text of the law, The Unlawful Activities (Prevention) Amendment Act, 2012, can be accessed at http://indiacode.nic.in/acts-in-pdf/032013.pdf; Also see, The Unlawful Activities (Prevention) Act, 1967, updated till December 2016, http://nyaaya.in/law/359/the-unlawful-activities-prevention-act-1967/#section-12
Ikram Ullah, “India is Losing Kashmir,” Foreign Policy, May 5, 2016, http://foreignpolicy.com/2016/05/05/india-is-losing-kashmir/.
 Christopher Snedden, Understanding Kashmir and Kashmiris (London: Hurst & Company, 2015), 271.
 Mimi Wiggins Perreault, “Social Media Amplify Concerns in India’s Jammu and Kashmir State,” The United States Institute of Peace, October 21, 2010, http://www.usip.org/publications/social-media-amplify-concerns-in-india-s-jammu-and-kashmir-state.
Toufiq Rashid, “In new video, J-K militant Burhan Wani asks youth to join him,” The Hindustan Times, August 26, 2015.
Bashaarat Masood, “Burhan warns of attacks on Sainik colonies,” The Indian Express, June 8, 2016.
 Ashiq Hussain, “Ex-engineering student emerges as Wani’s successor in Hizbul video,” The Hindustan Times, August 17, 2016, http://www.hindustantimes.com/india-news/former-engineering-student-emerges-as-burhan-wani-s-successor-in-new-hizbul-video/story-qiH7PAEO0cmr1X9w9SvDTM.html.
 Anil Bhatt, “Concern over weapon-snatching spurt in Kashmir,” PTI, October 19, 2016, http://ptinews.com/news/7987620_Concern-over-weapon-snatching-spurt-in-Kashmir-.
 Sameer Yasir, “Kashmir unrest: Increase in weapon-snatching incidents raises insurgency worries,” Firstpost, October 25, 2016, http://www.firstpost.com/india/kashmir-unrest-increase-in-weapon-snatching-incidents-raises-insurgency-worries-3071054.html.
Sandeep Unnithan, “Kashmir’s new militant tide,” India Today, July 30, 2015, http://indiatoday.intoday.in/story/kashmirs-new-militant-tide/1/455226.html.
 Poonam Agarwal, “Terrorists or Rebels? Here’s What Kashmiri Militants’ Families Say,” The Quint, December 17, 2016, https://www.thequint.com/kashmir-after-burhan-wani/2016/12/17/terrorists-or-rebels-the-quint-meets-bereaved-kashmiri-families-hizbul-mujahideen-jammu-and-kashmir-burhan-wani.
 M.K. Narayanan, “Address the ‘new normal’ in Kashmir,” The Hindu, October 10, 2016.
 “Radicalisation of the Kashmiri Mind”, The Open Magazine, September 5, 2016, 28.
Sandeep Unnithan, “Kashmir’s new militant tide,” India Today, July 30, 2015, http://indiatoday.intoday.in/story/kashmirs-new-militant-tide/1/455226.html.
Sandipan Sharma, “Glamour, gadgets and social media: Hizbul Mujahideen hard sells militancy to J&K’s youth,” Firstpost, August 16, 2015, http://www.firstpost.com/india/glamour-gadgets-and-social-media-hizbul-mujahideen-hard-sells-militancy-to-jks-youth-2393728.html.
Muzamil Jaleel, “The worry: What Burhan Wani’s death could give life to,” The Indian Express, July 9, 2016, http://indianexpress.com/article/india/india-news-india/the-worry-what-burhan-wanis-death-could-give-life-to-hizbul-mujahideen-2902423/.
 M Saleem Pandit, “No cops in four South Kashmir districts as protests rage,” The Times of India, August 23, 2016, http://timesofindia.indiatimes.com/city/srinagar/No-cops-in-four-South-Kashmir-districts-as-protests-rage/articleshow/53818227.cms.
The Jihadi Threat: ISIS, al-Qaeda, and Beyond, The United States Institute of Peace (December 2016/January 2017), 28, https://www.usip.org/sites/default/files/The-Jihadi-Threat-ISIS-Al-Qaeda-and-Beyond.pdf.
 Sandeep Unnithan, “Kashmir’s new militant tide,” India Today, July 30, 2015, http://indiatoday.intoday.in/story/kashmirs-new-militant-tide/1/455226.html.
Yoginder Sikand, “Syed Ali Shah Geelani And The Movement For Political Self-Determination For Jammu And Kashmir ( Part IV),” Counter Currents, September 24, 2010, http://www.countercurrents.org/sikand240910.htm.
Aarti Tikoo Singh, “Kashmir conflict on verge of merging with IS war: Baig,” The Times of India, August 25, 2016, http://timesofindia.indiatimes.com/india/Kashmir-conflict-on-verge-of-merging-with-IS-war-Baig/articleshow/53852302.cms.
 PTI, “ISIS flags raised in Kashmir,” The Hindu, June 12, 2015.
 Sheikh Nazir, “Shutdown in central Kashmir’s Preng to protest arrest of local youth over alleged IS link,” Greater Kashmir, February 1, 2016, http://www.greaterkashmir.com/news/kashmir/shutdown-in-central-kashmir-s-preng-to-protest-arrest-of-local-youth-over-alleged-is-link/208341.html.
 “Islamic State group a live threat, can’t be ignored in Valley: Army,” Greater Kashmir, November 28, 2015, http://www.greaterkashmir.com/news/kashmir/islamic-state-group-a-live-threat-can-t-be-ignored-in-valley-army/202802.html.
 Shweta Desai, “Log out to get ‘de-radicalised’: Kashmir to monitor social media and counsel radicalised youth,” Daily News and Analysis, June 3, 2016, http://www.dnaindia.com/india/report-log-out-to-get-de-radicalised-kashmir-to-monitor-social-media-and-counsel-radicalised-youth-2219196.
 Rahul Pandita, “Kashmir unrest: ‘Mahashay, marwa na dena’; how things came to pass in J&K,” September 22, 2016, http://www.firstpost.com/india/kashmir-unrest-mahashay-marwa-na-dena-how-things-came-to-pass-in-jk-3015236.html.
Munish Sharma, “Lashkar-e-Cyber of Hafiz Saeed”, Institute for Defence Studies and Analyses, March 21, 2016, http://www.idsa.in/idsacomments/lashkar-e-cyber-of-hafiz-saeed_msharma_310316.
Aarti Tikoo Singh, “Lashkar’s own Skype frazzles Indian intelligence,” The Times of India, April 30, 2012, http://timesofindia.indiatimes.com/india/Lashkars-own-Skype-frazzles-Indian-intelligence/articleshow/12934037.cms.
 “Social Media as a Tool of Hybrid Warfare”, The NATO Strategic Communications Centre of Excellence, (May 2016), 8.
Ahmed Ali Fayyaz, “Concern over radicalisation of educated J&K youth,” The Hindu, August 8, 2014.
Pranay Upadhyay, “Honeytraps on Facebook, spyware: How Pakistan is snooping on Indian troops,” News 18, March 15, 2016, http://www.news18.com/news/india/honeytraps-on-facebook-spyware-how-pakistan-is-snooping-on-indian-troops-1216191.html.
 “Government issues fresh guidelines for social media use in CAPFs”, The Indian Express, December 18, 2016, http://indianexpress.com/article/india/home-ministry-fresh-guidelines-central-paramilitary-forces-social-media-4433694/.
Himanshi Dhawan, “Pakistan may be waging proxy war in cyberspace too,” The Times of India, July 19, 2016, http://timesofindia.indiatimes.com/india/Pakistan-may-be-waging-proxy-war-in-cyberspace-too/articleshow/53273657.cms.
 “Pakistan promoting radicalisation among youth via social media: Government”, The Indian Express, July 20, 2016, http://indianexpress.com/article/india/india-news-india/jammu-kashmir-radical-pakistan-youth-social-media-hansraj-ahir-india-government-2925774/.
Vivek Chadha, Even If It Ain’t Broke Yet, Do Fix It: Enhancing Effectiveness Through Military Change (New Delhi: Pentagon Press, 2016), 129.
Arun Mohan Sukumar, “Upgrading India’s cyber security architecture,” The Hindu, March 9, 2016.
Subimal Bhattacharjee, “Too casual an approach to cyber security,” The Hindu Business Line, October 3, 2016.
This article originally appeared in ORF Occasional Paper 106
Published on 14 February 2017
India is a sovereign nation; is it digitally sovereign, too? This paper examines the degree to which India is self-reliant in electronic hardware. After all, for a country to be self-reliant in the information age, it has to either attain indigenous capability in electronic manufacturing and services or be equipped to protect data and mitigate the threats associated with supply chain vulnerabilities. This paper refers to self-reliance in electronic hardware as ‘electronic sovereignty’ or ‘e-Swaraj’. It aims to provide an overview of risks in the global supply chain and suggest solutions to mitigate them in the Indian context.
Information and communication technologies (ICT), and the Internet in particular, have become major driving forces of socioeconomic development: by one estimate, a 10-percent increase in mobile and broadband penetration increases per capita GDP by 0.81 percent and 1.38 percent, respectively, in developing countries. India, too, has taken steps to utilise ICT for development. Transforming the country into a digitally empowered society and knowledge-driven economy is at the heart of ‘Digital India’, which is among the Modi government’s flagship programmes. The initiative seeks to guarantee the availability of government services and information on a 24×7 basis irrespective of the user’s geographical location. For this programme to succeed, it is necessary to set up information networks and ensure their interconnection with almost all systems of the state, such as transport, energy, railways, banking, government institutions and public services. This interconnection is bound to deepen with advancements in technology and the Internet of Things.
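The cited elasticities are easy to apply mechanically. The sketch below is purely illustrative: it assumes the estimated elasticities apply linearly, and the function name and baseline GDP figure are hypothetical, not taken from the source.

```python
def gdp_uplift(baseline_gdp_per_capita, penetration_increase_pts, elasticity_per_10pts):
    """Per-capita GDP uplift implied by the cited penetration elasticities.

    elasticity_per_10pts is the percent GDP gain per 10-point rise in
    penetration: 0.81 for mobile and 1.38 for broadband in the estimate
    quoted above (assumed to scale linearly for illustration).
    """
    gain_pct = (penetration_increase_pts / 10.0) * elasticity_per_10pts
    return baseline_gdp_per_capita * gain_pct / 100.0

# A hypothetical $2,000 per-capita GDP and a 10-point rise in
# broadband penetration imply an uplift of roughly $27.6 per person.
uplift = gdp_uplift(2000, 10, 1.38)
print(uplift)
```

The point of the arithmetic is simply that, at a national scale, even fractional elasticities translate into large absolute gains, which is why connectivity sits at the centre of Digital India.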
ICT systems and digital technologies are the mainstay of an information society, as these networks and physical infrastructure form the digital highways on which data flows. Most equipment and technology for setting up such infrastructure in India are currently procured from global sources. These systems are vulnerable to cyber threats just like any other connected system, but perhaps the most important known attack vector is the digital ‘supply chain’: the information infrastructure and the various networks and systems of government and the private sector extensively leverage the latest technology and commercial electronic components – hardware, software and firmware – sourced globally. Global procurement has the inherent advantage of obtaining state-of-the-art technology at competitive prices. But it also raises the possibility of the supply chain being compromised by a state or non-state adversary through the tampering of components during their development, delivery, or firmware maintenance. Another concern is that of deliberately infecting products and services to place their operations under the control of a third party in order to extract information, manipulate data integrity or make the system fail under specific conditions.
The European Union Agency for Network and Information Security (ENISA) defines “Supply Chain”, “Integrity” and “Supply Chain Integrity” as:
(a) Supply Chain: “A system of organizations, people, technology, activities, information and resources involved in moving a product or service from producer to customer.”
(b) Integrity: “A concept that is related to perceived consistency of actions, values, methods, measures, principles, expectations and outcome.”
(c) Supply Chain Integrity (SCI): “Indication of the conformity of the supply chain to good practices and specifications associated with its operation.”
The ‘supply chain’ in effect encompasses all actions associated with the lifecycle of a product, including conception, design, production, quality assurance, transportation, usage, maintenance, discard and even reuse/recycle, where feasible. SCI is a complex interaction between various processes, organisations, technologies and resources. Integrity for information products implies that the product behaves in exactly the same manner as was understood when it was procured. The normative aim of SCI is to ensure that procured information systems and components fulfil the sought specifications without compromising the privacy of the user. This paper examines the nuances associated with India’s information systems supply chain.
The primary component of electronic sovereignty is communication networks and their components. These systems, used by governments, businesses, telecommunication providers and end-users, are often imported. The same holds for software and firmware. To illustrate the diversity of supply chain sourcing, journalist-author Thomas Friedman used the example of the average Dell laptop and the geographical distribution of its sources. This distribution is reproduced below:
The threats projected in the electronic supply chain include the following:
(a) Hardware or Software containing Malicious Logic: The product supplied is intentionally injected with malicious logic during the manufacture or project implementation stage to gain or alter sensitive information, cause denial of service, or even destroy the system under specific conditions.
(b) Non-Genuine Hardware or Software: Pirated software or fake hardware can cause immense damage to mission-critical systems or critical infrastructure, as a product that is not genuine threatens the reliability of the system. As per BSA’s Global Software Survey (May 2016), 39 percent of software installed on PCs around the world in 2015 was not licensed, and even in certain critical industries, unlicensed use was high. The broad statistics of unlicensed software usage are:
(c) Disruption in Supply Chain: Disruption in the supply chain on account of production failure or loss of reserve inventory, owing either to natural causes (floods, earthquakes, hurricanes, tsunamis, etc.) or man-made crises like strikes and riots, can directly affect operations which could be critical to the system.
(d) Vendors/Contractors: Vendors and contractors entrusted with project implementation have, by virtue of their job, access to information which may be sensitive, and the feasibility of its exploitation for nefarious activities cannot be ruled out.
(e) Hardware/Software Containing Unintentional Vulnerabilities: These are the unintentional “zero-day” vulnerabilities present in the system, which lend themselves to exploitation if discovered by adversaries or non-state actors.
(f) Insertion of Malicious Software during Maintenance: Malicious software can also be injected into the system during maintenance by the vendor or during software updates.
(a) Lack of Indigenous Capability: If a nation does not have indigenous capability, it must perforce rely on the global market, which is a complex and interconnected arena involving people, processes and technologies. Components used in networks are manufactured in some countries, assembled in others and eventually sold across the globe. The systems may be contracted by local sellers and integrators and subsequently installed and operated by different organisations. In a competitive bidding system, the vendor with the lowest quote gets the order: this may lead to the provisioning of substandard products, and that too from a source which may not conform to standards.
(b) Lack of Policy Guidelines: Detailed procurement policies, whether for procurement from Indian vendors or from global sources, would to an extent remove anomalies and grey areas. India’s defence procurement procedure, for example, has evolved over a period of time and to an extent addresses these issues, but the problem with electronic systems is that policy formulation is unable to keep pace with technological advancements. The lack of policies and guidelines makes it difficult to ensure equipment integrity.
(c) Non-Availability of Testing Facilities: Testing procured electronic systems requires state-of-the-art facilities. Establishing these is feasible only when the know-how is available – which will not happen unless the country obtains access to the complete technology. Also, systems delivered to end-users cannot always be evaluated because of the lack of appropriate evaluation approaches, methodologies and tools.
(d) Lack of Coordination: Coordination and sharing of information among government agencies as well as the private sector, especially of good practices, approaches and methodologies, needs to be enhanced.
(e) Non-Availability of Standards: Standards are a means of achieving harmonisation in processes so as to obtain a product or process which meets the desired specifications. The table below lists some standards laid down for the supply chain and supply chain integrity. The list is not comprehensive, but it is indicative of the fragmented approach being taken in the field.
Classification and Identification of SCI Standardisation Efforts
| Classification | Standard Development Organisation | Standard | Comments |
| --- | --- | --- | --- |
| 1. Origins (sources) of supply chains | ISO SC27 | ISO/IEC 27036: Guidelines for Security of Outsourcing | These are generic documents and not specific to SCI |
| 2. Processing and configuration | ISO SC31; iNEMI Supply Chain study group; HDPUG Supply Chain study group | RFID supply chain applications; Risk Modelling pilot; Data Exchange pilot | Nothing specific to SCI |
| 3. Delivery and governance of the Supply Chain | NASPO (North American Security Products Organization) | | Nothing specific to SCI |
| 4. | | N10656: update to ISO 27002: security techniques; Open Trusted Technology Framework | Nothing specific to SCI |
| 5. Verification and checks | ISO TC247 | Fraud Controls and Countermeasures; SEMI T20: Traceability (semiconductor industry) | Nothing specific to SCI |
(f) Lack of Skilled Security Professionals: Skilled manpower is an asset when it comes to managing information systems and handling cyber incidents, apart from ensuring ‘cyber hygiene’ in processes; such professionals remain in short supply.
As per the Comprehensive National Cybersecurity Initiative of the US:
“Risks stemming from both the domestic and globalized supply chain must be managed in a strategic and comprehensive way over the entire lifecycle of products, systems and services. Managing this risk will require a greater awareness of the threats, vulnerabilities, and consequences associated with acquisition decisions; the development and employment of tools and resources to technically and operationally mitigate risk across the lifecycle of products (from design through retirement); the development of new acquisition policies and practices that reflect the complex global marketplace; and partnership with industry to develop and adopt supply chain and risk management standards and best practices.”
Consistent with these principles, Section 806 of the National Defense Authorization Act for Fiscal Year 2011 authorises the Secretary of Defense or the Secretaries of the Army, Navy and Air Force to exclude vendors or their products if they pose an unacceptable supply chain risk. The US, in fact, emphasises building both global and national capabilities to address supply chain risks without undermining international competitiveness and legitimate trade flows.
China has emphasised indigenous innovation, placing a high priority on investing in domestic research and development (R&D) in all segments of the ICT sector, i.e., chips, hardware and software. China has also adopted various administrative measures for the multi-level protection of information security (the Multi-Level Protection Scheme, or MLPS). This imposes several requirements on security products destined for use in information systems, including the following:
(a) The entity that researches, develops and manufactures the product must be invested or controlled by Chinese citizens, legal persons or the state and have independent legal representation in China.
(b) The core technology and key components must have independent Chinese or indigenous intellectual property rights.
(c) The entity that develops and produces the product must confirm that the product contains no functions or programs that are intentionally designed as a vulnerability, backdoor or Trojan.
(d) Products that have been listed in the Certification and Accreditation Administration of People’s Republic of China catalogues of information security products must acquire a certificate issued by the China Information Security Certificate Centre.
(e) For products containing encryption technology, the MLPS requires approval from the office of State Commercial Cryptographic Administration and no imported products with encryption functionality can be used without approval.
China is subjecting technical imports to heavy security scrutiny. It is investigating the encryption and data storage features of technology products sold by large foreign companies in the country. The authorities are focusing on whether the products pose a security threat.
Russia’s approach is similar to China’s: it has implemented a certification regime that focuses on non-disclosed functionality, intended to address concerns about backdoors and other functionality that might not be disclosed to users. The country is also creating a National Software Programme to reduce its dependence on foreign products and facilitate domestic production, and has initiated plans to migrate its computer infrastructure from Windows to open-source operating systems such as Linux.
In recent years, there has been unprecedented demand for electronic goods: the requirement, which was just $76 billion in 2013, is likely to cross $400 billion by 2020. Domestic production is estimated to reach only $104 billion by 2020, implying a huge gap of $296 billion. With the aim of building domestic capacity in electronics manufacturing, the government released the National Policy on Electronics (NPE) in 2012. The vision articulated in the NPE is “to create a globally competitive electronics design and manufacturing industry to meet the country’s need and source the international market.” The government has taken numerous initiatives to boost domestic capacity in the electronics industry under the ‘Make in India’ programme. According to the Ministry of Electronics and Information Technology (MeitY), the semiconductor design market in India is expected to grow from $14.5 billion in 2015 to $52.58 billion in 2020. Electronics manufacturing is one of the pillars of Digital India, which promotes it with the target of net zero imports by 2020. In February 2014, the government approved the setting up of two semiconductor projects. These projects were to be led by Jaiprakash Associates and HSMC Technologies India in collaboration with foreign firms, but even 24 months after approval, no confirmation of their progress is available in the open domain.
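The projected import gap follows directly from the demand and production figures cited above; a minimal arithmetic sketch (figures in USD billion, taken from the text, not independent estimates):

```python
# Projections cited in the text, in USD billion (not independent estimates).
demand_2013 = 76       # electronics demand in 2013
demand_2020 = 400      # projected demand by 2020
production_2020 = 104  # projected domestic production by 2020

import_gap = demand_2020 - production_2020
print(f"Projected import gap by 2020: ${import_gap} billion")  # $296 billion
```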
India’s inability to nurture its domestic electronics manufacturing stems, in part, from an inefficient labour market, unreliable power supply, and inadequate transportation infrastructure.
Indigenous manufacturing is only one of the many facets of supply chain integrity; even if a chip is manufactured in one country, the system for which it is required may need components and hardware manufactured or assembled elsewhere. In today’s competitive global context, it is unlikely that a country can fulfil its requirements from the domestic market alone, given the expertise and wherewithal such ventures demand. The chip-making industry is highly competitive, has been around for about 50 years, and has well-established, well-entrenched players. China’s state-funded Semiconductor Manufacturing International Corporation was founded in 2000, but it could not make any difference to the chip-making industry’s dynamics despite 16 years of state-sponsored effort.
The National Cyber Security Policy promulgated in 2013 also articulates the method for reducing supply chain risks:
(a) Create and maintain testing infrastructure and facilities for IT security product evaluation and compliance verification as per global standards and practices.
(b) Build trusted relationships with product/system vendors and service providers for improving end-to-end supply chain security visibility.
(c) Create awareness of the threats, vulnerabilities and consequences of breach of security among entities for managing supply chain risks related to IT (products, systems or services) procurement.
The government has launched the Digital India and Startup India programmes to boost the digital economy and the local entrepreneurship ecosystem. Indian companies that saw growth a few years ago today face competition from international companies with deep pockets and access to technology. The position can be compared with that of Europe and China: Europe is more or less an extension of the US digital economy, while China carefully nurtured local digital businesses before allowing global companies to enter. China’s digital economy will drive 21 percent of its GDP growth over the next 10 years, driven by the fact that it is indigenous and has employed and developed lakhs of highly skilled digital technology talent, many from rural parts of the country. Baidu and Tencent (the Google and Facebook of China), two of hundreds of large internet companies in China, employ nearly 75,000 locals. Meanwhile, Europe struggles to develop real centres of innovation in technology. To have a thriving local digital economy, the government must implement policies that enable and nurture local digital talent and give domestic companies a level playing field.
In the competitive global marketing environment, information systems and network products depend on a complex, globally distributed and interconnected supply chain comprising a mix of government and private organisations. Because of the very nature of the problem, complete elimination of supply chain risks is not feasible; the aim should instead be to manage the risks so as to minimise the associated vulnerabilities and threats. The main challenges encountered in mitigating supply chain risks are enumerated below:
(a) Cost: The most preferred approach is to look inward, i.e., towards indigenous software and hardware. This involves additional cost in R&D, and R&D is both capital- and time-intensive. Is the nation willing to wait till an indigenous capability is developed, which may take decades? Economies of scale will also weigh against this route.
(b) Public Private Partnerships (PPP): It is well known that the private sector has expertise and wherewithal not available with the government. Hence, involving private industry as a partner with government is of paramount importance. PPP is required in fields such as incident sharing, R&D and remedial action.
(c) Multiple Agencies: The supply chain has multiple stakeholders spread across a vast spectrum of engineering, technology and procurement agencies, system integrators and maintenance organisations. It is not humanly feasible to security-vet all processes and associated people.
(d) Indigenous Capabilities in Manufacture: Current indigenous capability is at a nascent stage. According to the Minister of Communication and Information Technology, the government will push for setting up chip-manufacturing facilities in India. In February 2014, the government approved setting up two electronic chip-making plants entailing an investment of about Rs 63,412 crore. The government has also offered sops for electronics manufacturing in eight cities. Given that these initiatives will take time to fructify, however, the risk of malware looms large owing to India’s dependence on imports.
(e) Testing Facilities: India has only one Standardisation Testing and Quality Certification (STQC) organisation. Testing is graded into seven levels, but the Kolkata-based laboratory has the capability to test only up to level 4, against the requirement of testing up to level 7. This limitation can jeopardise equipment safety, because the supplier knows that full testing is not possible.
Information security has a direct bearing on national security and the threat landscape is going to increase further with more and more systems getting interconnected. Following are some of the measures recommended for achieving self-reliance in core technologies and to mitigate the threats to information systems:
(a) Short-term Measures:
(ii) Import through a Central Agency: A central agency may be nominated to import the requirements of government agencies and critical systems needed by national security-sensitive organisations. This agency should maintain a repository/database of firms that can be trusted for procurement, and should also share good practices followed globally.
(iii) Import Classification: Categorising the import of electronic equipment into two types: Type A, meant for critical infrastructure, sensitive establishments; and Type B, meant for normal usage. Stringent procedures can be laid for Type A and less stringent measures for Type B imports.
(iv) Audit: Audit by an independent organisation from commencement of project implementation till its final launch must form a part of the contract.
(v) Standards: Specifying standard-based processes will go a long way in risk mitigation. Following specified global standards enhances both transparency and trust between buyer and the seller/system integrator. All efforts must be made to bring procurements up to internationally acceptable standards.
(vi) Masking the End-User: In the case of sensitive departments, the ultimate user may be hidden from the vendor/original equipment manufacturer. This practice is useful against targeted insertion of malware.
(vii) Malicious Code Certificate: As per DPP 2013, the vendor is to provide a certificate as under:
“This is to certify that the Hardware and the Software being offered, as part of the Contract, does not contain embedded malicious code that would activate procedures to:-
(a) Inhibit the desired and designed function of the equipment.
(b) Cause physical damage to the user or equipment during the exploitation.
(c) Tap information resident or transient in the equipment/networks.”
The firm will be considered to have breached the procurement contract in case physical damage, loss of information or infringements related to copyright and intellectual property rights are caused due to activation of malicious code in embedded software.
However, in case of breach of this clause, the vendor is unlikely to accept liability. Moreover, by the time the vulnerability becomes known, the damage would already have been done. Nevertheless, the clause should form part of the contract, since self-certification acts as a deterrent.
(viii) Ask the Experts: Associating experts with strategically important projects, from the request for information stage of procurement till its implementation.
(ix) Encryption: A sound encryption policy will facilitate trust among various agencies.
(x) Incident Response: It must be made mandatory to constitute computer emergency response teams for developing analysis and response capabilities.
(b) Long-term Measures:
(a) ‘Make in India’: The Make in India initiative should give impetus to the indigenous design, development and manufacture of systems, sub-systems, components and software. This, however, should be restricted to sensitive portfolios only, for the reasons mentioned earlier in the paper.
(b) Testing Capability: At present, there is only one STQC facility, in Kolkata; this can be replicated in other locations. The fact that imports of high-end equipment will only increase underlines the need for more domestic testing facilities. Till an indigenous facility is built, random samples could be tested at third-country premises.
(c) Policy and Guidelines Formulation: These are a must for developing indigenous capabilities as well as for streamlining import procedures, including testing of electronic inventory. Policies are the pillars for mutual trust between the government and private industry.
(d) Data Protection: Protection of data is important, especially when data available on the internet resides outside the geographical boundaries of the country.
(e) Network Providers: Data rides on networks created by telecom service providers (TSPs). TSPs should be held accountable for providing communication channels free from malware infections, and should devise measures to ensure that only malware-free devices are hooked to their networks.
(f) Public-Private Partnerships: The expertise lies with private industry, while formulation of policies is the domain of the government; partnership between the two is therefore essential.
(g) Trusted Agency: Involve the Defence Research and Development Organisation and the Department of Electronics and Information Technology in strategic projects from the conceptual stage to mitigate supply chain risks.
India is experiencing a rapid transition into a digital technology-driven country. Although this shift carries enormous possibilities for the country’s growth, it also exposes it to cyber threats. The government has launched the ‘Digital India’ and ‘Make in India’ programmes to digitally empower society, but it has so far failed to articulate a position on India’s data and digital sovereignty. As the world becomes increasingly interconnected, protection of the nation’s privacy, data and digital infrastructure will assume utmost importance. These factors will play a vital role in its digital sovereignty, or what may be called e-Swaraj.
“Digital India: Unleashing Prosperity”, Deloitte, accessed March 1, 2016, https://www2.deloitte.com/content/dam/Deloitte/in/Documents/technology-media-telecommunications/in-tmt-tele-tech-2015-noexp.pdf
“From The World Is Flat by Thomas Friedman, Dell Inspiron 600m Notebook: Key Components and Suppliers”, in “Do You Have The Right Practices In Your Cyber Supply Chain Tool Box?”, NDIA Systems Engineering Conference, October 29, 2014, accessed September 23, 2016, http://www.dtic.mil/ndia/2014system/16888WedTrack1Moss.pdf
“IT Supply Chain: National Security-Related Agencies Need to Better Address Risks”, US Government Accountability Office Report to Congressional Requesters, accessed September 23, 2016, http://www.gao.gov/assets/590/589568.pdf
“Seizing Opportunity Through License Compliance, BSA Global Software Survey, May 2016”, BSA The Software Alliance, accessed August 23, 2016, http://globalstudy.bsa.org/2016/downloads/studies/BSA_GSS_US.pdf
“Supply Chain Integrity: An Overview of the ICT Supply Chain Risks and Challenges, and Vision for the Way Forward”, ENISA, 2015, accessed March 1, 2016, https://www.enisa.europa.eu/publications/sci-2015
“The Comprehensive National Cybersecurity Initiative”, The White House, accessed August 23, 2016, https://www.whitehouse.gov/sites/default/files/cybersecurity.pdf
“Cyber Supply Chain Risk Management”, white paper, Microsoft
“Securing Our Cyber Frontiers: NASSCOM-DSCI Cyber Security Advisory Group Report”, DSCI, accessed September 23, 2016, https://www.dsci.in/sites/default/files/NASSCOMDSCI%20Cyber%20Security%20Advisory%20Group%20(CSAG)%20Report.pdf
“Electronics Manufacturing”, Digital India, accessed March 1, 2016, http://www.digitalindia.gov.in/content/electronics-manufacturing
“India Wants to Build its Own Chips to Satisfy Electronics Demand”, Bloomberg, accessed September 23, 2016, http://www.bloomberg.com/news/articles/2014-02-27/india-wants-to-build-its-own-chips-to-satisfy-electronics-demand
“Does India really need a $5 billion semiconductor unit?”, The Economic Times, accessed February 15, 2016, http://economictimes.indiatimes.com/tech/hardware/does-india-really-need-a-5-billion-semiconductor-unit/articleshow/48156199.cms
“Digital India is dying: Without intervention, Digital India looks to simply be a colony of US and China”, The Times of India, accessed August 22, 2016, http://blogs.timesofindia.indiatimes.com/toi-editorials/digital-india-is-dying-without-intervention-digital-india-looks-to-simply-be-a-colony-of-us-and-china/
“Government to offer sops for electronics manufacturing in 8 cities: Ravi Shankar Prasad”, The Indian Express, accessed June 17, 2014, http://indianexpress.com/article/business/economy/government-to-offer-sops-for-electronics-manufacturing-in-8-cities-ravi-shankar-prasad/
“Defence Procurement Procedure 2013”, Ministry of Defence, accessed March 12, 2016, http://mod.nic.in/writereaddata/DPP2013.pdf
“The Valley in Digital India”, The Hindu, accessed March 5, 2016, http://www.thehindu.com/business/Industry/the-valley-in-digital-india/article6515038.ece
(This piece originally appeared in the ORF Issue Brief No 159)
Abstract: India is a sovereign nation; is it digitally sovereign, too? This paper examines the degree to which India is self-reliant in electronic hardware. After all, for a country to be self-reliant in the information age, it has to either attain indigenous capability in electronic manufacturing and services or be equipped to protect data and […]
Encryption has since become ubiquitous. Google, for instance, has made Secure Sockets Layer (SSL) encryption the default standard for its Gmail service and Google searches since 2010 and 2011, respectively. Internet users are also obtaining access to more sophisticated end-to-end encryption services for free, through applications like WhatsApp and Telegram. Encryption is also available as built-in security for devices such as Apple’s iPhone. This ubiquity has seen the resurgence of claims by law enforcement agencies that their ability to ‘lawfully’ intercept communication for criminal and terrorism-related investigations has been hampered. Encrypted channels, law enforcement agencies maintain, allow their users to “go dark.” Thus, they demand that companies retain access to all user communications and data, including encrypted data, and extend that access to law enforcement entities upon request. These demands have been met with strong resistance from supporters of encryption in both industry and civil society. They argue that any deliberate weakening of encryption would not only affect user privacy but also set back the overall standard of security in the market by many years.
While no easy answers to the debate have emerged, it appears that the result of this latest iteration of ‘crypto wars’ will change the nature of online exchanges. India, for its part, has taken steps to resolve this debate. In September 2015, the Indian government released a draft National Encryption Policy (hereinafter Draft Policy) that sought to set encryption standards and lay down conditions for decryption of information for lawful investigation. The draft was laudable in its intent to calibrate policy in response to rapid technological developments in India and abroad. However, the conditions set out under the Draft Policy were problematic enough that it drew widespread criticism, resulting in its swift withdrawal. The Indian government is now in the process of finalising a second draft of the National Encryption Policy and has solicited inputs from the ICT industry and civil society to make it more balanced and acceptable.
As complex as the issue of encryption is, its fundamental dilemmas can perhaps be encapsulated in one question: Is it paradoxical to seek secure law enforcement access to encrypted data? The answer, however, varies from one country to the next. ‘Preferable’ levels of encryption depend on a complex network of legal, political, economic and even social factors unique to each country. They will depend on how the country has traditionally treated privacy and what restrictions on free speech exist. They will also depend on the extent to which the country’s ICT industry is reliant on international services. Predominantly, they will depend on two things: the technological security and self-reliance of the country’s information infrastructure, and the expertise of cyber security personnel. The Indian narrative around encryption is one of considerable complexity.
The Draft Policy sought to establish the protocols and algorithms for encryption, key exchanges and digital signatures for all government entities, businesses and citizens. It allowed businesses and citizens to use encryption as long as they handed in the plaintext, encrypted text and the hardware/software used for encryption when such information was sought by law enforcement agencies. In what was among the most controversial provisions of the Policy, it mandated all businesses and citizens using encryption to retain the plaintext of encrypted communication for 90 days from the date of transmission or transaction. The Policy also mandated that every service provider (whether they were based in India or abroad) would need to enter into an agreement with the government to operate within the Indian market. The nature of this agreement and what it would entail was not clarified.
The Draft Policy, therefore, wanted to ensure lawful access to encrypted data through a combination of three measures. First, it imposed a quasi-licensing model where a vendor or encryption service provider would presumably enter into an agreement with the government to allow unrestricted access to data. Without complying with this provision, they would not be allowed access to Indian markets. This would have the effect of establishing a backdoor within the service for Indian law enforcement and intelligence agencies. The Indian government is known to have deployed this pre-condition at least once in the past, when it compelled Blackberry to allow special access to its Blackberry Messenger and Blackberry Internet Service email.
Second, the Draft Policy also required encryption service providers to provide the government with working copies of the software and hardware used for encryption. This ‘key recovery’ mechanism meant that even without the backdoor requirement, the government would probably retain the capability to decrypt all symmetric encryption that used the same key to encrypt and decrypt.
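The key-recovery concern can be illustrated with a deliberately toy symmetric cipher (a keyed XOR stream derived from a hash; not a real cipher and not secure). The point is structural: because one shared key both encrypts and decrypts, anyone who obtains a copy of that key, including an escrow or recovery agent, can read every message encrypted under it.

```python
import hashlib
from itertools import cycle

def keystream(key: bytes, length: int) -> bytes:
    """Derive a repeating keystream from the key (toy construction, insecure)."""
    digest = hashlib.sha256(key).digest()
    return bytes(b for b, _ in zip(cycle(digest), range(length)))

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric: the very same function both encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-secret"
ciphertext = xor_cipher(key, b"confidential message")
# Anyone holding a copy of the key -- e.g. under a key-recovery mandate --
# recovers the plaintext exactly as the intended recipient would:
assert xor_cipher(key, ciphertext) == b"confidential message"
```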
Third, for information sent through end-to-end encryption and other methods using asymmetric public-private keys (where the government would not be able to obtain the plaintext through the service provider, because the information is not retained on the service provider’s servers), the policy mandated that senders of such information retain the unencrypted plaintext for 90 days. This would enable the government to bypass the service provider and obtain the content of communication straight from the source.
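Why the service provider cannot simply hand over plaintext in such systems follows from the structure of asymmetric encryption itself: messages are encrypted to the recipient's public key, and only the matching private key, which in an end-to-end design never leaves the recipient's device, can decrypt them. A textbook RSA sketch with deliberately tiny numbers (illustrative only; real keys are thousands of bits):

```python
# Toy RSA with tiny primes. A message encrypted under the public key (e, n)
# is recoverable only with the private exponent d.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: 2753 (Python 3.8+ modular inverse)

message = 65
ciphertext = pow(message, e, n)          # all an intermediary's servers would see
assert pow(ciphertext, d, n) == message  # only the private-key holder decrypts
```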
The threat of creating backdoors in information systems has been the focal point of controversy in the recent past. The Apple v. FBI case involving the iPhone 5C of the San Bernardino shooter was, in many ways, the flashpoint for a global conversation on encryption. In the matter, Apple resisted the FBI’s request to develop a new operating system that would allow it to bypass the phone’s passcode to access the encrypted content within. Among other things, Apple raised the concern that being compelled to write such code would be a violation of the company’s First Amendment right to freedom of speech. The company also raised concerns that a backdoor such as this, once created, would compromise the security of all devices. This ‘GovtOS’ would be a generic software patch that could be adapted for any iPhone. Even if the leak of that particular piece of code was prevented, the knowledge that the creation of such a code was possible would undermine security. It could also make Apple employees targets for extortion.
This becomes doubly important in the Indian context due to the existence of precedent in the Blackberry matter. While it is known that Blackberry provided special access to law enforcement to its devices, the means through which it was granted remains unknown. This uncertainty may have even catalysed the decline of Blackberry’s market share in the country since 2012. If a similar provision regarding a clandestine agreement between the government and an encryption provider, similar to that in the Draft Policy, is retained in the second draft, it would erode considerable trust in the cyber security market. It would prompt state-of-the-art encryption providers to either exit the Indian market or, at the very least, reconsider their engagement with the country. In the long run, the lack of access to advanced encryption products and tools is likely to hinder India’s projection of itself as a robust digital economy.
One of the arguments that Apple offered in its resistance of the FBI’s demands was that necessitating the writing of additional software was an arbitrary deprivation by the government of its liberty under the Fifth Amendment to the United States Constitution. The Fifth Amendment also protects a person from being compelled in any criminal case to be a witness against herself. A complementary provision has been provided under Article 20(3) of the Indian Constitution which reads, “No person accused of any offence shall be compelled to be a witness against himself.” If the second iteration of the encryption policy contains a provision mandating the retention of plaintext of encrypted information for 90 days, it may abrogate the right against self-incrimination under Article 20(3). The right against self-incrimination covers both oral as well as documentary evidence. Plaintext of one’s encrypted communication would be considered documentary evidence.
While Article 20(3) does not cover testimony in the form of a handwriting or DNA sample, or blood spatter, among others, this would not apply to decrypted copies of one’s messages or email. This finds support in a three-judge bench ruling of the Supreme Court in Selvi v. State of Karnataka. The court, while deliberating on the legality of an admission obtained through narco-analysis, expanded the remit of Article 20(3). It was held that any process that “impedes the subject’s right to choose between remaining silent and offering substantive information,” cannot be allowed.
The court expanded this further by urging the need to respect the privacy of mental processes. It explains, “While the scheme of criminal procedure as well as evidence law mandates interference with physical privacy through statutory provisions that enable arrest, detention, search and seizure among others, the same cannot be the basis for compelling a person to impart personal knowledge about a relevant fact.” Besides, search and seizure must be statutorily empowered. An executive policy on encryption cannot mandate the production of decrypted communications. If the new draft of the policy retains the requirement of the 90-day plaintext retention policy, it is likely to be challenged in courts and liable to be struck down.
The Draft Policy of 2015 had other concerning provisions. It was withdrawn because it failed to account for the additional vulnerabilities that would arise as a result of centrally retaining the tools to decrypt information. It also ignored the fact that creation of ‘exceptional access’ for law enforcement would compromise forward secrecy. Dynamic standards (such as OpenSSL and Transport Layer Security) ensure forward secrecy through the use of a non-deterministic algorithm to generate a new set of public-private keys for each session. This set of public-private keys is used only to generate the session key for that particular session and is never used again. The loss or theft of one private key does not compromise information exchanged in a past or future transaction. If a universal decryption key is created for the government, then its accidental compromise would leave all past conversations on a certain platform open to whoever is in possession of the key.
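The forward-secrecy property described above can be sketched with a toy ephemeral Diffie-Hellman exchange (tiny, classroom-sized group parameters for illustration; real deployments use 2048-bit groups or elliptic curves). Each session draws fresh ephemeral secrets, so no long-term key exists whose compromise would unlock past sessions:

```python
import secrets

p, g = 23, 5  # toy Diffie-Hellman parameters (illustrative only)

def new_session() -> int:
    """Each session draws fresh ephemeral secrets; nothing is reused."""
    a = secrets.randbelow(p - 2) + 1   # client's ephemeral secret
    b = secrets.randbelow(p - 2) + 1   # server's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)  # public values exchanged on the wire
    k_client = pow(B, a, p)
    k_server = pow(A, b, p)
    assert k_client == k_server        # both sides derive the same session key
    return k_client

s1, s2 = new_session(), new_session()
# The two sessions share no long-term secret: compromising one session key
# reveals nothing about any other (forward secrecy in miniature).
```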
The response to the Draft Policy, owing in no small part to its executive overreach, was largely reactionary rather than constructive. Concerns regarding security are important, as it affects not just the state but the citizens and businesses that operate within it. The government needs access to information in order to investigate and prosecute crime, and to monitor information exchanges in a timely manner to thwart threats to national security. However, information gathering must align with the rights of the very citizens the state is safeguarding. It is now well established within the technical community that the security provided by encryption is a prerequisite for the development of e-commerce and online banking. It is also a critical tool for investigative journalists and whistleblowers. Any policy that stops short of actively encouraging the adoption and proliferation of secure communications will hinder the growth of information and communication technologies. The new policy must, therefore, find a middle ground between the need to access data and the need to maintain the security of ICT architecture.
The encryption policy is likely to have far-reaching effects. At a time when India is deliberating on a Constitutional recognition of the right to privacy and the data protection regulations have been found wanting in handling international data, the policy represents an opportunity to strengthen India’s information security landscape.
The crypto wars in India may have lessons to learn from the conflict between Silicon Valley and the United States government. Still, there are many unique considerations that Indian policymakers must keep in mind. India’s domestic legal system, after all, suffers from a lack of privacy legislation, inadequate data protection rules, and a surveillance regime that is, for the most part, guided by colonial legislation. How the country regulates encryption will have implications on rights, commerce and national security. It will need to harmonise the regulatory landscape so that the multifarious interests of various stakeholders are balanced.
There is no explicit Constitutional recognition of the right to privacy in India. Instead, it has emerged through a series of (often contradictory) pronouncements by Indian courts to gain recognition as a penumbral right under other fundamental freedoms. This position, however, is tenuous at best. The government, through the Attorney General, has claimed that there is no right to privacy available to Indian citizens. The Supreme Court of India, in 2015, convened a Constitution Bench to adjudicate upon the issue. The apex court is expected to finally rule on the contours of the right within the next year. In the meantime, traditional privacy-based arguments against decryption of information by the government are not as readily applicable.
This is further complicated by India’s surveillance regime, which lacks safeguards in the form of judicial review. Interception of communications in India is authorised by executive order under Section 5(2) of the Telegraph Act, 1885 and Section 69B of the Information Technology Act, 2000 (hereinafter IT Act). Orders of interception under Section 5(2) also rest on loosely defined preconditions such as “on the occurrence of public emergency” or being “expedient… in the interest of national security”. Similarly, under Section 69B, the government can order the collection of information from any computer resource to “enhance cyber security.” Without the guidance of a privacy law, orders for surveillance are left to the subjective determination of a non-judicial authority. These broad powers of interception can also extend to encrypted information.
The Data Protection Rules drafted under Section 87(2)(ob) of the IT Act classify passwords as “sensitive personal data or information”. “Password”, in turn, is defined to include encryption and decryption keys. However, the rules also mandate that a body corporate that collects this sensitive data must share it with a government agency upon receiving a request in writing. As a result, India’s data protection laws have faced criticism both at home and overseas. The European Union, for one, views Indian data protection regulation as inadequate for European data. A recent survey by the Data Security Council of India (DSCI) estimates that this may have resulted in an opportunity loss of USD 2-2.5 billion.
Even the technical standards available for data protection do not prescribe a high standard of encryption. Earlier, the licensing agreement between the Indian Department of Telecommunications and Internet Service Providers (ISPs) stipulated that no ISP would be permitted to use encryption stronger than 40-bit symmetric keys. Any use of stronger encryption required express approval from the government and the submission of decryption keys. The licence agreement also prohibited the use of bulk encryption by ISPs. Curiously, the Unified Licensing Agreement that replaced the erstwhile service-specific licensing agreements dropped the 40-bit upper limit. It, however, retained the prohibition on bulk encryption and specified that the use of encryption by an ISP’s subscribers would be governed by a policy drafted under the IT Act. The removal of the 40-bit standard lifted the ceiling on permissible encryption, but the rule has not been supplanted by any provision that clarifies the issue.
Taken together, the absence of a privacy law, excesses of surveillance powers, and the inadequacy of data protection norms create inconsistent policies that are not conducive to investments and growth in technology. The Draft Policy was a reflection of these inconsistencies.
Shortly after the release of the Draft Policy in 2015, the government issued a clarification that mass-market encryption products would be excluded from the ambit of the policy; that effectively excluded services like WhatsApp and tools like OpenSSL from the policy’s effects. It is unclear whether the second iteration of the encryption policy will apply to mass-market encryption tools. It, however, should. A “good” encryption policy can harmonise the regulatory landscape around information security, in turn triggering changes to decades-old laws.
It is noteworthy that this time around, the Ministry of Electronics and Information Technology is seeking inputs from industry bodies and civil society while drafting the policy. This is an opportunity to avoid the same pitfalls that the Draft Policy suffered from. It is also a time to analyse and learn from other jurisdictions that have seen similar debates.
To truly emulate best-in-class security standards that encourage not only the entry of state-of-the-art communications providers but also the growth of competing domestic services, the policy must conform to the tests of necessity and proportionality while setting decryption mandates. The UN Special Rapporteur on the Freedom of Opinion and Expression has urged states not to impose blanket bans or comprehensive restrictions on encrypted services, and to impose any limitations only on a case-by-case basis, through court orders. India’s encryption policy must, however, go beyond merely setting decryption mandates. Rather, the policy must aim to:
The encryption policy that is drafted now is likely to set the market standards for the coming 25 years. In that time, it is hoped that the Indian market will have replaced foreign communication providers with those that are developed domestically. It will be essential to ensure that information belonging to Indian citizens is not compromised by foreign intelligence agencies and non-state actors. To that end, the policy must keep in mind safeguards such as the Roots of Trust standard proposed by National Institute of Standards and Technology and the guidelines suggested by the Reserve Bank of India. In addition to these, the Policy should strengthen the security of the internet ecosystem by ensuring the following:
Mass-market tools like email clients and web browsers are among the most frequently used internet services. They are also especially vulnerable to man-in-the-middle attacks, in which an attacker intercepts encrypted communications and decrypts them by stealing a private key. The encryption policy should mandate that all providers of mass-market services transition to secure transport protocols such as TLS with ephemeral key exchange. Generating a fresh set of encryption keys for every session ensures forward secrecy: even if a user’s key is compromised, her past communications remain secure and the potential damage is contained.
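The transport-layer hardening described above can be sketched with Python’s standard `ssl` module. This is a minimal illustration rather than a deployment guide: it builds a client context that rejects legacy protocol versions, and with modern OpenSSL defaults the remaining TLS 1.2+ cipher suites negotiate ephemeral (ECDHE) keys, which is what provides forward secrecy.

```python
import ssl

def make_strict_context() -> ssl.SSLContext:
    """Client-side TLS context that refuses legacy protocol versions."""
    # create_default_context() enables certificate verification and
    # hostname checking, both of which guard against man-in-the-middle.
    ctx = ssl.create_default_context()
    # Reject SSLv3 / TLS 1.0 / TLS 1.1 outright.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_strict_context()
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
assert ctx.check_hostname  # hostname verification stays enabled
```

A context like this would be passed to `ssl.SSLContext.wrap_socket` (or an HTTP client) when connecting to a service; servers would apply the analogous restriction on their listening sockets.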
Storing sensitive data on mobile computing devices, smartphones and portable storage drives presents an inherent vulnerability. Some of the most prominent data breaches and cyber attacks in recent memory, like the Office of Personnel Management hack and Stuxnet, are said to have been made possible through unsecured end-devices. It is essential that the encryption policy address this lacuna in network security. The policy should mandate that employees of the government as well as the private sector who handle sensitive data use protective measures such as the RSA SecurID, an authentication mechanism consisting of a hardware or software token that generates a random authentication code every 60 seconds. This helps protect the data of employees who use personal or remote devices to connect to official networks. For devices such as smartphones that connect to a cloud, encryption helps protect data across devices in cases of theft. Most users do not modify the security configuration on their devices, preferring to keep the default configuration the devices come with. Products like the Apple iPhone are considered more secure precisely because they are encrypted by default. In markets like India, where the proliferation of cheap smartphones poses a threat to network security, it would be prudent for the policy to recommend a shift towards a secure-by-design framework.
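The one-time-code mechanism can be illustrated with the open TOTP standard (RFC 6238), which underpins most software authenticator apps. RSA SecurID itself uses a proprietary algorithm, so this is an analogous sketch rather than its actual implementation: a shared secret plus the current time window yields a short code that becomes useless once the window passes.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 60, digits: int = 6) -> str:
    """Time-based one-time code in the style of RFC 6238."""
    counter = unix_time // step                        # current time window
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Two readings inside the same 60-second window produce the same code.
assert totp(b"shared-secret", 0) == totp(b"shared-secret", 59)
```

A server holding the same secret recomputes the code for the current window and compares; an attacker who observes one code cannot reuse it in the next window.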
One of the core issues in the encryption debate has been whether states should regulate the strength of communications encryption. This paper has earlier discussed the inadequacy of the 40-bit encryption keys that Indian law currently seems to prescribe. As early as 1995, three different teams of researchers, at the Massachusetts Institute of Technology, the École Polytechnique, and the RSA Data Security Conference, demonstrated that 40-bit encryption can be broken with little effort. It is therefore critical that the encryption policy mandate a higher standard of security for encrypting communications. The Reserve Bank of India and SEBI have recommended that 256-bit encryption be made the default standard for financial transactions. Financial services, however, are not the only points of vulnerability. Many other government services, such as the Railways and the Aadhaar unique identity database, also handle vast amounts of information and have been targets of cyber attacks in the past. This is only likely to intensify in the future. The policy must therefore mandate that all government-to-government, government-to-business and government-to-citizen communications adopt a 256-bit encryption standard. This should also be made mandatory for industries identified as Critical Information Infrastructure. The policy, however, should not aim to set any limits on the encryption of business-to-consumer communication; these standards should be allowed to develop organically.
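The gap between 40-bit and 256-bit keys is easy to quantify. The sketch below assumes, purely for illustration, an attacker who can test one billion keys per second:

```python
# Back-of-the-envelope arithmetic for why 40-bit keys are obsolete.
RATE = 10**9  # key guesses per second (illustrative assumption)

keyspace_40 = 2**40    # ~1.1 trillion keys
keyspace_256 = 2**256

seconds_40 = keyspace_40 / RATE
print(f"40-bit keyspace exhausted in ~{seconds_40 / 60:.0f} minutes")  # ~18 minutes

years_256 = keyspace_256 / RATE / (3600 * 24 * 365)
print(f"256-bit keyspace would take ~{years_256:.1e} years")
```

Every additional bit doubles the attacker’s work, which is why the jump from 40 to 256 bits moves brute force from minutes to a timescale vastly exceeding the age of the universe.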
The encryption policy must require that every vendor of an encryption product and every encryption service provider register with a designated authority within the Indian government. The information sought during registration should include only the name and address of the cryptography provider; a description of the encrypted product or service; the strength of encryption used; and a designated point of contact within the service provider for law enforcement assistance. This registration must be purely declaratory in nature and must not involve an agreement between the service provider and the government. Nor must it mandate the sharing of, or modification to, the source code to allow exceptional access to the encrypted product or service. In order to expedite the process, the government must endeavour to respond to a registration request (or seek clarifications, if any) within 30 days of submission of the information. If the appropriate authority fails to respond within those 30 days, the registration should be deemed successful.
Requiring the decryption of information involves a higher degree of intrusion than the standard search and seizure of electronic documents, because encrypted information can generally be presumed to be sensitive material that its creator sought to protect. In this respect, India’s surveillance laws, which prescribe an administrative authorisation model for the interception of communications, are inadequate. The encryption policy must mandate that every request for the decryption of information be authorised by a warrant from a judicial magistrate.
Even proponents of strong encryption acknowledge the legitimate need of law enforcement to access data. Key escrow systems and device backdoors are not considered viable solutions because they endanger data by centralising keys and weakening devices. There are, however, alternatives that the government could explore to ensure ‘lawful and secure’ access to encrypted data. Cryptographic envelopes are one such alternative. A cryptographic envelope uses existing technology such as PGP to securely store the decryption key to a device. The envelope can only be opened by the party whose public key was used to encrypt it. To decentralise the decryption process, multiple envelopes can be used, one inside the other, each encrypted with a different entity’s public key. These entities could be the manufacturer of the device and a designated law enforcement agency. Both envelopes would have to be opened individually by each entity, preventing the unilateral misuse of a decryption key by either the government or the industry. It would also protect the data in case one of the entities’ private keys is compromised. Using envelope encryption in conjunction with a judicial warrant requirement for decryption would greatly increase transparency. Law enforcement agencies would have to exhibit the public safety imperative for seeking the decryption of communications; the judiciary would test the legal validity of the claim; and the manufacturer of the device would ascertain whether the decryption violates the user’s privacy. Through this approach, every request for decrypting communications would undergo three layers of scrutiny. It would also minimise the threat of non-state actors gaining access to encrypted communications by exploiting backdoors.
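The nested-envelope structure can be sketched in a few lines. A real system would use public-key encryption (e.g. OpenPGP); here a toy XOR stream cipher keyed by SHA-256 stands in, purely to show the shape of the scheme, and the “manufacturer” and “agency” roles are illustrative. The point is structural: recovering the inner key requires both custodians, in order, so neither can act unilaterally.

```python
import hashlib
import secrets

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. NOT for real use."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(payload: bytes, key: bytes) -> bytes:
    """XOR the payload with a key-derived stream; stands in for PGP."""
    return bytes(a ^ b for a, b in zip(payload, _keystream(key, len(payload))))

open_envelope = seal  # XOR is its own inverse

device_key = b"the-device-decryption-key"
manufacturer_key = secrets.token_bytes(32)  # held by the manufacturer
agency_key = secrets.token_bytes(32)        # held by a designated agency

# Inner envelope keyed to the manufacturer, outer envelope to the agency.
envelope = seal(seal(device_key, manufacturer_key), agency_key)

# Recovery requires BOTH custodians, applied in order.
recovered = open_envelope(open_envelope(envelope, agency_key), manufacturer_key)
assert recovered == device_key

# Either custodian acting alone recovers nothing useful.
assert open_envelope(envelope, agency_key) != device_key
```

With real public-key envelopes, each layer could only be opened with the corresponding private key, so a compromise of one custodian’s key still leaves the inner layer sealed.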
With a plurality of actors and interests involved, encryption is perhaps the most complex policy issue of this decade. It implicates rights, commerce, law enforcement and intelligence. India’s Draft Policy addressed only law enforcement concerns while failing to truly address the multifarious issues at play. Privacy activists and the ICT industry have long favoured stronger encryption standards. However, one important stakeholder that has not yet weighed in on the debate is the intelligence community. Not unlike law enforcement agencies, espionage organisations also prefer easy access to information. Unlike law enforcement agencies, however, intelligence organisations are mandated with maintaining ‘information assurance’: protecting domestic data from being intercepted and exploited by foreign intelligence agencies and non-state actors. The Snowden revelations made it clear that many foreign espionage organisations like the National Security Agency and GCHQ are ready and willing to leverage their superior technological capability to monitor the activities of even democratically elected governments. India’s dual-use cyber technology is not on par with that of its western competitors, and India is not part of cooperative espionage networks like the Five Eyes. Against this backdrop, sophisticated encryption is the only viable option to keep data secure, both within domestic borders and without.
With more countries choosing to reflexively regulate encryption, it is important that India take a considered approach. International trends show that countries with influential governments and opaque intelligence services are disfavouring end-to-end encryption; these include Russia, Pakistan, Kazakhstan, Colombia and China. There are two essential reasons why India should not follow their lead in setting encryption standards more restrictive than market standards.
First, unlike, say, a Chinese citizen, an Indian internet user is heavily reliant on the services of companies based abroad. If India imposes data disclosure requirements that are incompatible with those companies’ domestic standards, such requests are likely to fail (as they often do under Mutual Legal Assistance Treaties). Given that the United States and Germany are the clear market leaders in the number of encryption products, Indian policymakers must watch closely the approach these governments take towards encrypted platforms.
Germany has already adopted a pro-encryption policy, and the United States government has declined to endorse the controversial Burr-Feinstein Bill that favoured strong restrictions on encryption. The United States government’s approach, however, is clear from the provisions of the Trans-Pacific Partnership (TPP) that it has spearheaded and endorsed. The TPP chapter on ‘Technical Barriers to Trade’ addresses cryptographic products directly, from the angle of surveillance and access to decrypted data. The chapter does not prevent the law enforcement agencies of member countries from legally demanding decrypted data from encryption service providers. In no uncertain terms, however, it restricts member countries from imposing onerous technical regulation on, and demanding backdoors to, encrypted products. It reads: “No Party shall impose or maintain a technical regulation or conformity assessment procedure that requires a manufacturer or supplier of the product… [to] transfer or provide access to a particular technology, production process or other information, for example, a private key or other secret parameter, algorithm specification or other design detail.” Critics have claimed, and reasonably so, that the provision is little more than lip service by the United States government to data integrity, and that many loopholes remain through which homegrown encryption providers can be forced to install backdoors. Even so, the provision is a distinct advantage for countries that have an abundance of domestic encryption services, a luxury that India cannot afford.
Second, as a corollary to India’s reliance on foreign encryption tools, it is important for the Indian industry to keep these services in circulation until such time that India’s domestic services are able to offer a similar standard of security. A restrictive encryption policy can cause the exit of state-of-the-art encryption services from the market. Consequently, the market that develops domestically will be technologically stunted from a lack of competition. In the long term, this will leave Indian data vulnerable to exploitation. In the short term, however, the exit of foreign encryption platforms from the Indian market would mean that even the metadata that assists law enforcement agencies would be lost. In essence, this would be a lose-all scenario for Indian law enforcement and intelligence agencies.
Whatever form the encryption policy finally takes, it must bear in mind the plurality of issues involved. It must address the needs of internet users, the ICT industry and the intelligence community in addition to law enforcement agencies. This will require more direct engagement with multistakeholder platforms that discuss these issues. It must also follow technology neutrality by not treating services differently depending on their willingness to cooperate with law enforcement. Further, the policy must adopt principles that will stay relevant over the next few decades and are not rendered redundant by technology. Digital India’s increasing reliance on digital payments and the Aadhaar database means that the government will need to find technologically advanced ways to keep data safe. If India favours the adoption of technologies like blockchain for this purpose, that choice will need to be enabled by a strong encryption policy. Lastly, the policy must look inward and help develop a domestic cryptography industry that, over the next five years, is not only able to compete with its global counterparts but is sought after worldwide.
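The hash-linked structure that makes blockchain-style ledgers tamper-evident can be sketched in a few lines; the record format and ledger shape here are illustrative, not those of any particular blockchain. Each block commits to the hash of its predecessor, so altering any past entry breaks every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: str) -> None:
    """Add a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})

def verify(chain: list) -> bool:
    """Check every link; any retroactive edit breaks a downstream hash."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list = []
for entry in ["payment:100", "payment:250", "payment:75"]:
    append_block(ledger, entry)
assert verify(ledger)

ledger[1]["record"] = "payment:999999"  # tamper with history
assert not verify(ledger)
```

In a real deployment, every participant holds a copy of the chain and signatures control who may append, so tampering is both detectable and attributable.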
The policy recommendations in this paper offer a starting point towards that goal. They will, however, require engagement with the technical community to create an encryption policy that is truly future-proof.
 Whitfield Diffie and Susan Landau, “Privacy on the Line: The Politics of Wiretapping and Encryption,” MIT Press, (Cambridge, 2007), p. 13
 Kaveh Waddell, “The Long and Winding History of Encryption,” The Atlantic, January 13, 2016, http://www.theatlantic.com/technology/archive/2016/01/the-long-and-winding-history-of-encryption/423726/.
 Nicolas Lidzborski, “Staying at the Forefront of Email Security and Reliability: HTTPS-Only and 99.978% Availability,” Official Gmail Blog, March 20, 2014, https://gmail.googleblog.com/2014/03/staying-at-forefront-of-email-security.html.
 Apple Inc. “iOS Security Whitepaper,” Apple Inc., May 2016 accessed November 10, 2016, https://www.apple.com/business/docs/iOS_Security_Guide.pdf.
 US Law enforcement, in the 1970s called for a ban on hard drive encryption of Microsoft
 Berkman Center for Internet & Society at Harvard University, “Don’t Panic: Making Progress on the “Going Dark” Debate,” Berkman Center for Internet & Society at Harvard University, February 1, 2016, https://cyber.harvard.edu/pubrelease/dont-panic/Dont_Panic_Making_Progress_on_Going_Dark_Debate.pdf.
 Steven Levy, “Why Are We Fighting the Crypto Wars Again?,” Backchannel, March 11, 2016, https://backchannel.com/why-are-we-fighting-the-crypto-wars-again-b5310a423295#.saxlftve3.
 Josh Horwitz, “After a Lengthy Battle, BlackBerry Will Finally Let the Indian Government Monitor Its Servers,” The Next Web, July 10, 2013, http://thenextweb.com/asia/2013/07/10/after-a-lengthy-battle-blackberry-will-finally-let-the-indian-government-monitor-its-servers/.
 IN THE MATTER OF THE SEARCH OF AN APPLE IPHONE SEIZED DURING THE EXECUTION OF A SEARCH WARRANT ON A BLACK LEXUS IS300, CALIFORNIA LICENSE PLATE 35KGD203
 Apple Inc.’S Reply To Government’s Opposition To Apple Inc.’S Motion To Vacate Order Compelling Apple Inc. To Assist Agents In Search IN THE MATTER OF THE SEARCH OF AN APPLE IPHONE SEIZED
DURING THE EXECUTION OF A SEARCH WARRANT ON A BLACK LEXUS IS300, CALIFORNIA LICENSE PLATE 35KGD203 https://www.eff.org/files/2016/03/15/apple-reply-to-govt-opposition-to-apple-motion-to-vacate.pdf
 Supplemental Declaration Of Nicola T. Hanna In Support Of Apple Inc.’S Reply To Government’s Opposition To Apple Inc.’S Motion To Vacate Order Compelling Apple Inc. To Assist Agents In Search IN THE MATTER OF THE SEARCH OF AN APPLE IPHONE SEIZED DURING THE EXECUTION OF A SEARCH WARRANT ON A BLACK LEXUS IS300, CALIFORNIA LICENSE PLATE 35KGD203 https://www.justsecurity.org/wp-content/uploads/2016/03/FBI-Apple-CDCal-Apple-Reply-Declarations.pdf
 Adam Matthews, “BlackBerry’s India Problem,” OPEN Magazine, April 30, 2011, http://www.openthemagazine.com/article/business/blackberry-s-india-problem.
 The Constitution of India, 1950, Article 20(3).
 M.P Sharma v. Satish Chandra, AIR 1954 SC 300.
 Internet Architecture Board, “IAB Statement on Internet Confidentiality,” Internet Architecture Board, accessed November 10, 2016, https://www.iab.org/2014/11/14/iab-statement-on-internet-confidentiality/.
 Sidharth Pandey, “Is Privacy a Fundamental Right? Constitution Bench of Supreme Court to Decide,” NDTV, August 11, 2015, http://www.ndtv.com/india-news/is-privacy-a-fundamental-right-constitution-bench-of-supreme-court-to-decide-1206100.
 CRID – University of Namur, “First Analysis of the Personal Data protection Law in India,” Directorate General Justice, Freedom and Security, European Commission, Accessed November 10, 2011 http://ec.europa.eu/justice/data-protection/document/studies/files/final_report_india_en.pdf.
 Press Trust of India, “‘Right to Privacy Not a Fundamental Right’: Centre Tells Supreme Court,” NDTV, July 3, 2015, http://www.ndtv.com/india-news/right-to-privacy-not-a-fundamental-right-centre-tells-supreme-court-784294.
 Sidharth Pandey, “Is Privacy a Fundamental Right? Constitution Bench of Supreme Court to Decide,” NDTV, August 11, 2015, http://www.ndtv.com/india-news/is-privacy-a-fundamental-right-constitution-bench-of-supreme-court-to-decide-1206100.
 Information Technology Act, 2000, §69(B)
 Bedavyasa Mohanty, “Inside the Machine: Constitutionality of India’s Surveillance Apparatus,” IJLT Issue 12 (To be published)
 Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, Rule 3(i)
 Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, Rule 2(1)(h)
 Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, Rule 6(1)
 See, Bhairav Acharya, “Comments on the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011”, Centre for Internet and Society, March 31, 2013, http://cis-india.org/internet-governance/blog/comments-on-the-it-reasonable-security-practices-and-procedures-and-sensitive-personal-data-or-information-rules-2011
 NASSCOM, “NASSCOM Update on EU Data Protection Regime,” NASSCOM http://www.nasscom.in/sites/default/files/policy_update/EU%20data%20Protection%20Regulation.pdf
 Department of Telecommunications, “Licence Agreement For Provision Of Internet Service (Including Internet Telephony) Amendments,” Department of Telecommunications, Ministry of Communications, Government of India, accessed November 11, 2016, Clause 1.10.1, http://dot.gov.in/granted-issue-guidelines.
 Department of Telecommunications, “License Agreement For Unified License,” Department of Telecommunications, Ministry of Communications, Government of India, January 8, 2014, Clause 37(1), http://www.dot.gov.in/sites/default/files/Amended%20UL%20Agreement_0_1.pdf?download=1.
 David Kaye, “Report on Encryption, Anonymity, and the Human Rights Framework,” Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, A/HRC/29/32 May 22, 2015, accessed November 10, 2016, http://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/CallForSubmission.aspx.
 Andrew Regenscheid, “Roots of Trust in Mobile Devices” National Institute of Standards and Technology, February 2012, http://csrc.nist.gov/groups/SMA/ispab/documents/minutes/2012-02/feb1_mobility-roots-of-trust_regenscheid.pdf.
 Reserve Bank of India, Report and Recommendations of the Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds(2011) available at http://cab.org.in/IT%20Documents/WREB210111.pdf.
 Berkman Center for Internet & Society at Harvard University, “Don’t Panic: Making Progress on the “Going Dark” Debate,” Berkman Center for Internet & Society at Harvard University, February 1, 2016, https://cyber.harvard.edu/pubrelease/dont-panic/Dont_Panic_Making_Progress_on_Going_Dark_Debate.pdf.
 Whitfield Diffie and Susan Landau, “Privacy on the Line: The Politics of Wiretapping and Encryption,” MIT Press, (Cambridge, 2007)
 Reserve Bank of India, Report and Recommendations of the Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds(2011) available at http://cab.org.in/IT%20Documents/WREB210111.pdf
 Securities and Exchange Board of India, Report of the Committee on Internet-Based Securities, Trading and Services(2000) available at http://220.127.116.11/RRCD/oDoc/29-nettrading_200059.pdf(Accessed September 3, 2016)
 Matt Tait, “An Approach to James Comey’s Technical Challenge,” Lawfare, April 27, 2016, https://www.lawfareblog.com/approach-james-comeys-technical-challenge.
 Ashley Deeks, “The International Legal Dynamics Of Encryption, ”Hoover Institution, Series Paper no. 1609, accessed November 10, 2016, http://www.hoover.org/research/international-legal-dynamics-encryption.
 Human Rights Watch, “Russia: ‘Big Brother’ Law Harms Security, Rights,” Human Rights Watch, July 12, 2016, https://www.hrw.org/news/2016/07/12/russia-big-brother-law-harms-security-rights.
 Monitoring and Reconciliation of International Telephone Traffic Regulations, 2010, Regulation 5(6), http://www.pta.gov.pk/media/monitoring_telephony_traffic_reg_070510.pdf.
 Kaveh Waddell, “Kazakhstan’s New Encryption Law Could Be a Preview of U.S. Policy,” The Atlantic, December 8, 2015, http://www.theatlantic.com/technology/archive/2015/12/kazakhstans-new-encryption-law-could-be-a-preview-of-us-policy/419250/.
 Digital Rights LAC, “The Dangerous Ambiguity of Communications Encryption Rules in Colombia” Digital Rights Latin America and the Caribbean, January 30, 2015, http://www.digitalrightslac.net/en/la-peligrosa-ambiguedad-de-las-normas-sobre-cifrado-de-comunicaciones-en-colombia/.
 Emily Rauhala, “China Passes Sweeping Anti-Terrorism Law With Tighter Grip on Data Flow,” Washington Post, December 28, 2015, https://www.washingtonpost.com/world/china-passes-sweeping-anti-terrorism-law-with.
 Bruce Schneier, Kathleen Seidel, and Saranya Vijayakumar, “A Worldwide Survey of Encryption Products,” Berkman Klein Centre for Internet and Society at Harvard University, February 11, 2016, https://cyber.harvard.edu/publications/2016/encryption_survey.
 Thorsten Benner and Mirko Hohmann, “The Encryption Debate We Need,” Global Public Policy Institute, May 19, 2016, http://www.gppi.net/publications/global-internet-politics/article/the-encryption-debate-we-need/.
 Mark Hosenball and Dustin Volz, “Exclusive: White House Declines to Support Encryption Legislation – Sources,” Reuters, April 7, 2016, http://www.reuters.com/article/us-apple-encryption-legislation-idUSKCN0X32M4.
 Trans-Pacific Partnership, Technical Barriers to Trade, Paragraph 5, Section A, Annexure 8-B, https://ustr.gov/sites/default/files/TPP-Final-Text-Technical-Barriers-to-Trade.pdf
 Trans-Pacific Partnership, Technical Barriers to Trade, Paragraph 3, Section A, Annexure 8-B, https://ustr.gov/sites/default/files/TPP-Final-Text-Technical-Barriers-to-Trade.pdf
 Jeremy Malcolm, “Has the TPP Ended the Crypto Wars? Hardly.,” Electronic Frontier Foundation, November 18, 2015, https://www.eff.org/deeplinks/2015/11/has-tpp-ended-crypto-wars.
 A blockchain is a peer-to-peer shared digital ledger that maintains records of transactions. Every participant has an identical copy of the ledger, and any change to the ledger is automatically disseminated to all other copies, ensuring that all users have an up-to-date copy. Access to the ledger is controlled through digital signatures and cryptographic authentication. Blockchains form the core underlying technology of Bitcoin and are what make it so secure.
This article originally appeared in ORF Occasional Paper 102.
The National Critical Information Infrastructure Protection Centre (NCIIPC) was to be created through a gazette notification, with specific responsibility for protecting all CII. The Computer Emergency Response Team – India (CERT-In) would be responsible for all non-critical systems, while continuing to collect reports on all cyber attacks and incidents. Although the law was amended in 2008, it took six years before the NCIIPC was formally created through a Government of India gazette notification in January 2014.
The NCIIPC started off with several sectors, but has now consolidated them into five broad areas that cover the ‘critical sectors’. These are:
While defence and intelligence agencies have also been included under the CII framework, these have been kept out of the purview of the NCIIPC’s charter. Instead, the Defence Research and Development Organisation (DRDO) has been tasked with protecting these bodies.
A key factor in identifying CII has been the inter-dependencies among systems, used to determine which are the ‘most critical’. Using this matrix, the NCIIPC settled on the Power sector as the most critical, followed by the Energy sector. These inter-dependencies are likely to change, however, and could evolve into a more complex model for deciding the criticality of systems at a later stage.
However, the NCIIPC is also mindful that even though some systems are isolated, the accelerated development of the IT sector and the advent of the Internet of Things (IoT) will increase the complexity of protecting CII. The NCIIPC’s guidelines state: “Presently many of these critical systems may relatively be isolated or the complementarities may be progressing at a snail’s pace and thus considered relatively secure from intrusion. However, with the accelerated pace of development within the IT sector it will be difficult for these critical systems to isolate themselves from the outside world, and to maintain the boundaries between ‘inside’ and ‘outside’.”
Over time, NCIIPC has been able to sharpen its charter to ensure better “coherence” across the government to respond to cyber threats against CII. This also means that it will provide the strategic leadership to the government’s efforts to “reduce vulnerabilities…against cyber terrorism, cyber warfare and other threats”. This also includes identification of all CII systems for “approval by the appropriate government for notifying them” as “protected systems”. This is a critical element in NCIIPC’s charter and helps it embrace the private sector and work with them.
Under its charter, NCIIPC has been working towards recognizing many of the Government of India’s systems as ‘protected systems’, which has several positive consequences. Under the current laws, a cyber attack on the IT (Information Technology) or Supervisory Control and Data Acquisition (SCADA) systems that lie at the heart of CII attracts a maximum of three years’ imprisonment. However, after NCIIPC has undertaken an elaborate Vulnerability, Threat and Risk (VTR) assessment, the system is forwarded for notification by the “appropriate government authority.”
Once notified as a “protected system”, the CII is immediately placed under the ambit of Section 66F of the IT Act (Amended) 2008, which defines any cyber attack on such a system as an act of cyber terrorism. This increases the quantum of punishment from three years’ imprisonment to life imprisonment, raising the deterrence against attacking CII. Furthermore, it ensures that NCIIPC is able to offer its services in post-incident risk mitigation as well as the investigation process. As per the existing protocol, the Chief Information Security Officer (CISO) of the designated CII entity is also given access to the intelligence on cyber threats and vulnerabilities gathered by NCIIPC.
The agency has also started approaching various sectors to create guidelines that can set standards for private and public sector entities across the board. To achieve this, NCIIPC began a process of interfacing with stakeholders in several sectors to understand their IT and SCADA systems, along with normative practices particular to a given sector, such as vendor selection, patch management and legal contracts. Working with these stakeholders, NCIIPC created the first sector-specific draft guidelines, for the power sector, which were submitted to the Ministry of Power in May 2016. If accepted, these will be the first set of national sector-specific guidelines to be promulgated by the Government of India.
NCIIPC has also been instrumental in declaring two major entities as protected systems: the Aadhaar unique identification project and the Long Range Identification and Tracking (LRIT) system of the Ministry of Shipping.
Any interface between the private sector and the government is usually fraught with risk. The government is essentially a regulator, while the private sector seeks the freedom to conduct business. Government interference not only threatens the private sector’s profitability, but can also prove to be an existential threat. This is a framework that NCIIPC has consciously chosen not to follow.
Its approach is based on the principle that cyber security is a shared responsibility. NCIIPC’s charter includes its role to “…coordinate, share, monitor, collect, analyse and forecast, national level threat to CII for policy guidance, expertise sharing and situational awareness for early warning or alerts”. However, it also maintains that “the basic responsibility for protecting CII system shall lie with the agency running that CII”.
The role of the entity holding the CII is thus clear, and NCIIPC aims to strengthen the agency that runs the CII systems. To achieve this, it has embarked on a formal private sector interface that will establish joint partnerships to increase awareness of the kinds of threats that CII owners are likely to face in the coming years. As a case in point, its close cooperation with a private power sector company was used as the base for drafting the national guidelines for the sector. This has also sensitised NCIIPC to the challenges that the private sector faces, in terms of alignment with management as well as budgetary support for acquiring the latest counter-measures against future cyber threats and attacks.
The Snowden revelations showed that as long as proprietary software created by developed economies dominates the cyber landscape, systems will remain extremely vulnerable. This has prompted an initiative to ensure that India develops an eco-system that can support the development of indigenous software and hardware.
However, that eco-system is incomplete unless there are adequate cybersecurity professionals available to partner with NCIIPC to cover the whole sector. This calls for forging partnerships between public and private entities, leveraging each other’s strengths while avoiding the traditional regulatory approach. While Section 70A and its sub-clauses empower NCIIPC to take the regulatory route, it has drawn more on the US Critical Infrastructure Information Act 2002, which emphasises ‘voluntary’ cooperation rather than an enforcement- and compliance-driven model. This cooperative framework has served the US well and continues to strengthen the cyber security of its CII. It merges the strengths of the private and public sectors to not only create standardised operating procedures, but also build an eco-system that is sensitive to each other’s lacunae and strengths.
(This essay originally appeared in the third volume of Digital Debates: The CyFy Journal)
 Section 70, Information Technology Act, 2000
 Department of Electronics and Information Technology, Notification No. 9(16)/2004-EC http://meity.gov.in/sites/upload_files/dit/files/S_O_18(E).pdf
 Guidelines for protection of CII Version 1.0, June 2013
 National Critical Information Infrastructure Protection Centre, Functions and Duties, https://nciipc.gov.in/?p=function (Accessed September 1, 2016)
 The appropriate government authority can be the federal or the state government, depending on the location of the CII. So far, the only two systems identified by NCIIPC as CII have been notified by the federal government. NCIIPC is examining the efficacy of notifying CII through state governments, where appropriate.
 Both these systems were notified by the Government of India earlier this year
 As articulated in the Functions and Duties of NCIIPC, Supra Note 5
 Title II—Information Analysis And Infrastructure Protection https://www.dhs.gov/sites/default/files/publications/CII-Act_508.pdf (Accessed September 1, 2016)
A specialist neurology clinic in Japan has revealing signage, with its website address prominently proclaiming its function (photo courtesy: #DomainsInTheWild). What is significant about this? The top level domain (TLD) name (.clinic) is precise and resonates.
A company that makes smart earplugs using cutting-edge research and high-end technology has the website address http://hush.technology. It is clear and focuses on its USP – technology. Similarly, the website address of one of the world’s largest banks, with operations in 75 countries and nearly 190,000 employees, is https://group.bnpparibas. The company name itself is the website address, with no confusing nomenclature signifying whether it is a company or an organisation. One of the world’s best-known brands of yogurt and other dairy delicacies has its American website at http://usa.fage. The brand, Fage, is the address.
To illustrate the significance of this emerging new internet naming space, here is an interesting story.
On April Fools’ Day in 2015, a curious scene welcomed visitors to the world’s largest search engine. Millions of internet users were stunned when they logged in to Google to search for their daily dose of information, data or horror news.
Google had an inverted image on its page:
Confused users blamed their computers or smartphones or maybe their browsers. Some may even have kicked the PC.
Some news stories tried to explain it in the following way.
And funnily, Google had replaced its standard website URL (google.com) with a mirror image (com.google)—using their new ‘dotBrand’ address.
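The trick itself is mechanical: com.google is simply google.com with its dot-separated labels reversed. A minimal Python sketch of that mirroring (illustrative only):

```python
def mirror_domain(domain: str) -> str:
    """Reverse the dot-separated labels of a domain name."""
    return ".".join(reversed(domain.split(".")))

print(mirror_domain("google.com"))  # com.google
```

Reversing again restores the original address, which is why the joke read as a perfect mirror image.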
This marked a watershed moment on the internet, for the last letters after the dot – the TLD – were no longer the staid old names.
What did this signify? That henceforth, the website address could be precise and unrestricted: it could be a brand or a name; a corporate, service or industry term (clinic/hospital/club); or just a noun (shop, shoes).
For over 30 years just a handful of TLDs defined how content could be indexed, stored and found. The addresses were generic in nature and users had to adapt. There were no alternatives.
The early addresses were TLDs like .com, .org, .edu and .mil. In 2001, TLDs were added to include .info, .pro, .aero, .coop, etc. The list was expanded again in 2004 to accommodate growing user requirements, with TLDs like .post, .mobi and .asia.
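The “last letters after the dot” can be read off mechanically from any web address. A small Python sketch, using a handful of TLDs mentioned in this article as an illustrative sample (not an authoritative registry):

```python
from urllib.parse import urlparse

# Illustrative samples drawn from this article, not a complete registry.
LEGACY_TLDS = {"com", "org", "edu", "mil"}
NEW_GTLDS = {"clinic", "technology", "bnpparibas", "fage", "xyz"}

def tld_of(url: str) -> str:
    """Return the top-level domain: the label after the last dot of the host."""
    host = urlparse(url).hostname or url
    return host.rsplit(".", 1)[-1].lower()

for url in ("http://hush.technology", "https://group.bnpparibas",
            "https://www.google.com"):
    kind = "new gTLD" if tld_of(url) in NEW_GTLDS else "legacy TLD"
    print(f"{url} -> .{tld_of(url)} ({kind})")
```

The point of the expansion described above is precisely that the final label is no longer confined to the legacy set.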
However, mobile networks and an explosion of content caused an unprecedented increase in use of the internet. The proliferation of websites (over 1.3 billion websites at last count) created a tsunami of information. But the addresses remained the same.
While content, creativity and use cases multiplied – from music to finance, and from entertainment to business – the manner in which an online presence could be accessed remained limited. Billions of users searched for information using the same naming conventions and website addresses – a clear constraint in an addressing space no longer capable of meeting the needs of the new-age internet, with its plethora of content, applications and new consumers.
It was against this background of rapid change that the Internet Corporation for Assigned Names and Numbers (ICANN), which oversees and brings together the communities that define policies and technology standards on the internet, decided to allow thousands of new generic top level domains (ngTLDs), with no restriction on the size, type or category of names. In 2012, over 1,930 applications were received for new TLDs – covering generic, daily-use words (vote, organic, shop, photography, for example); geographical names (such as London, Istanbul, Las Vegas, Melbourne); communities (hotels, banks, accountants); and, of course, brands and company names (Audi, Dabur, BNPParibas).
After a detailed evaluation and approval process, agreements were signed and the first of these ngTLDs were rolled out in early 2014.
The rollout has been fairly rapid: the trickle of initial new names has become a torrent. The new naming system has huge advantages over what was available in the past, and it is changing the entire fabric of the internet.
For one, it is redefining the entrenched status quo. And in keeping with the changing nature of the internet – home for new business models, mobile-centric startups and a hub for huge financial transactions daily – the domain addresses can be precise and focused:
The dotBrand domains have been the most successful of all the new gTLDs in terms of the value they deliver: trust and confidence.
For example, it is estimated that some 2.5 million false bookings are reported from travel sites every year. This is a huge problem for hotels, tourism and travel sites. In fact, there have been dozens of instances where travellers have been duped by fake sites mimicking the world’s largest home-sharing site.
Marriott is among the leaders changing this. By using .marriott, its customers can be assured of not being cheated, and they can make bookings on the site without fear or worry.
It is now becoming clear that this segment has wrought one of the most fundamental changes in digital presence, by giving brands guaranteed security in their online presence.
The brand TLD is not just a domain name or a new way to set up a website. It is a digital identity platform. Brands are creating authentic and secure ecosystems leveraging their new digital presence and technical and messaging capabilities around this.
A dotBrand owner controls 100 percent of who is allowed to get domains within their TLD, and fully controls how these sites are used. This is no longer a generic space like .com, where anyone can squat on a trademark or brand name and become a source of worry for the real brand or business owner. Entire communities of channel partners and customers converge around a unique messaging platform using the brand’s TLD.
If brand-owners do not like something that is posted on their dotBrand, they can take it down. This removes the hassles of litigation under the defined dispute resolution policies — the brand itself is in complete control.
Here is what Canon said when it launched its http://global.canon site – a visual and colourful feast of a website if there was one:
Because “.canon” can only be used by Canon Group companies and services, visitors to sites that use the new TLD can easily confirm their authenticity and be assured that the information they contain is reliable. Additionally, by leveraging the simplicity of the TLD, which is easy to remember and easy to understand, Canon aims to enhance the Company’s global brand value.
The biggest benefit of the dotBrand is to the end user. The brand’s customers can be confident that when they visit a site on a dotBrand TLD, they are in absolutely the right place – not inside the web of some cybersquatter or hijacker with a pseudo-site.
This security factor is comforting. It gives customers a safe website that is easy to remember – just the brand name – and offers real security to online customers. Apart from brand protection online, it is the main reason why a large number of financial organisations and banks – including many of India’s topmost banks – are slowly moving in this direction. Some launches are expected soon.
The dotBrand is the best defence against counterfeiting. A bank’s customers can be clearly told that if they go to a .com or some other TLD, there are no guarantees; but if they go to a dotBrand then that is always the real deal – the only address for that brand. Customers cannot be ripped off.
In fact, the Google incident on April Fools’ Day 2015 served as a wake-up call for all in the internet addressing space. Google itself transformed into the Alphabet company and renamed its website https://abc.xyz – a new corporate identity enabled by the domain .xyz, which has over 6.5 million domain registrations (see graph below). In all, 1,179 new TLDs have been launched since the Google trick last year, and 23.9 million new domains have been registered.
TLD WISE SHARE:
Large brands are adopting both generic domains and their own brand domains, setting a trend. For example, the largest consumer electronics show (CES) in the world – which prides itself on showcasing the latest technology marvels – moved its website to http://ces.tech. Canon has shifted its branding strategy to a brilliant site, global.canon. Audi launched one of its high-end machines recently and shifted social media conversations to twitter.audi – which resolves to its official Twitter handle – creating a huge buzz. Nike, BMW, Apple and Dabur, meanwhile, are all deploying their brand TLDs in different ways. Dabur has been a pioneer in using the .brand TLD in email communications, having shifted its entire corporate email addresses to mail.dabur.
The Twentieth Century Fox Film Corporation launched a new media hosting product on its .FOX brand domain: mediacloud.fox
Also, Abbott, one of the world’s largest pharmaceutical companies, has a new corporate site, http://www.lifetothefullest.abbott. Thus, from finance (.financial, .loans) to real estate (.rentals, .market) and even adult content (.porn, .adult), the internet naming space has moved into a new orbit, one that is reshaping the old order and lifting the TLD conventions of the past onto a technologically superior plane. New TLDs have started to rapidly change the old norms of online presence. Internet addresses have become more personal, personable and reliable, and this trend will only accelerate in the future.
Published on 12 September, 2016
This article scrutinises the publicly available portions of the TPP, TISA and the RCEP that deal with IP and digital economy. It examines the current Indian legal standard for compliance with the proposed treaty provisions. For most of the IP section, the assumption being made is that the current Indian standard is TRIPS-compliant (e.g., with respect to data exclusivity) and therefore, the TRIPS standard is used interchangeably with the current Indian position. These agreements portend new international obligations for India, including those it has already foreseen and complied with (such as the legal backing for Technological Protection Measures) as well as those that would require a modification of the prevailing standard.
| TPP | TISA | RCEP | TRIPS/Current Indian Standard |
| --- | --- | --- | --- |
| Zero customs duty on digital products | Similar to TPP | – | TPP-compliant |
| Non-discrimination for foreign digital products | Government procurements exempt | – | None, as far as government procurement of software is concerned |
| Personal information protection | Similar to TPP | – | TPP-compliant |
| Cross-border information flow | Similar to TPP, unless the Hong Kong proposal is accepted by negotiators | – | Possibly non-compliant with TPP, depending on an interpretation of “legitimate public policy objective” in TPP Art. 14.11.3 and TISA E-commerce Annex Draft Art. 2.4. The Indian law requires the data recipient to comply with Indian data protection standards as a bare minimum. |
| Prohibition on server localisation | Similar to TPP, may be more expansive if “or investing in its territory” is accepted by negotiators | – | None |
| – | – | Commitment to accede to the WCT/WPPT/Beijing Treaty/UPOV | N/A |
| Non-visual trade marks | – | Similar to TPP, with opposition for scent marks | TPP-compliant in respect of sound marks, possibly non-compliant in respect of smell marks. RCEP-compliant if the Indian opposition is accepted by negotiators. |
| No rights for prior users of registered marks | – | Prior users protected | TPP non-compliant; RCEP-compliant. |
| Expansive definition of patentable subject matter | – | Similar to TPP, but may have changed in a more recent draft | TPP non-compliant – specifically with respect to evergreening and software patents. |
| Patent term adjustment for processing delays, and specific term adjustment for pharmaceutical marketing approval | – | Similar to TPP | TPP/RCEP non-compliant |
| 10 year data exclusivity for agrochemicals | – | – | TPP non-compliant |
| 5 year data exclusivity for drugs | – | Identical to TPP | TPP non-compliant |
| 8 year data exclusivity for biologics | – | – | TPP non-compliant |
| Patent linkage | – | – | TPP non-compliant |
| Life plus 70 copyright term | – | – | TPP non-compliant |
| Anti-circumvention measures | – | Similar to TPP | Possibly TPP non-compliant – mens rea requirement. |
| Presumption in favour of patent/trademark/copyright validity | – | – | Possibly TPP non-compliant, especially with respect to patents. |
| Criminal sanctions for trade secret misappropriation | – | – | TPP non-compliant. |
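The compliance picture above can also be tabulated programmatically. A Python sketch, with a few entries transcribed from the table (abridged and illustrative, not an authoritative legal assessment):

```python
# A few rows from the comparison table, abridged; statuses transcribed
# from the article, not an authoritative legal source.
indian_standard = {
    "Zero customs duty on digital products": "TPP-compliant",
    "Personal information protection": "TPP-compliant",
    "Patent linkage": "TPP non-compliant",
    "Life plus 70 copyright term": "TPP non-compliant",
    "5 year data exclusivity for drugs": "TPP non-compliant",
}

# Provisions where acceding to the TPP would require changing Indian law.
gaps = sorted(p for p, status in indian_standard.items()
              if status == "TPP non-compliant")
print(gaps)
```

Filtering like this makes the thrust of the table explicit: most of the divergence sits in the patent and data-exclusivity rows, not in the e-commerce rows.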
There are some concerning aspects to these treaties as they stand today, in terms of the obligations that India would need to accept by signing up. A preliminary question must be addressed first: why are the implications of these FTAs of concern, when they are agreements (barring the RCEP) that India is not even a party to?
The answer is three-fold. First, there is a growing perception that international standard-setting, at least in the IP space, is increasingly moving away from traditional multilateral regimes such as the TRIPS and entering the turf of treaties styled as investment or trade agreements. A passive approach to FTAs (especially those that bind significant portions of the world’s economic muscle) is today equivalent to a passive approach to international IP norm-setting. Line-drawing exercises in international law inherently entail distributive effects and are, therefore, bound to benefit the interests of participants in these processes. Second, accession to these FTAs is never out of the question. Nor is the possibility that parties to the existing FTAs may require equivalent commitments in bilateral negotiations with non-parties such as India. It would be extremely convenient, for instance, for the US to ask India to accept a data exclusivity commitment in a hypothetical future India-US FTA simply by pointing out that a similar provision exists in the TPP, which regulates trade between the US and many other Asian countries. Third, even if the possibility of India’s passive acceptance of these standards were not worrying enough, the risk of being forced to accept TPP-equivalent standards as a precondition in bilateral negotiations is much higher for poorer countries, which need the TRIPS flexibilities even more than India does.
Of the treaties surveyed, the only one whose final text (pertaining to IP and e-commerce, the focus of this piece) is available in the public domain is the TPP. A draft leaked in October 2015 is the latest text available for the RCEP, while the latest TISA documents were leaked in May 2016. The RCEP does not propose to legislate on e-commerce, meaning that only its effects on IP norms can be measured; the TISA’s IP chapter is yet to be leaked, meaning that only its effects on e-commerce can be examined. This article will examine the impact of the TPP and (where feasible) RCEP and the TISA on IP and the digital economy. Only those instances where these agreements create significant new international obligations for India, if we were to accede to them, are highlighted.
While an important element of the TPP’s Digital Two Dozen (D2D) core principles is the removal of customs duties on all digital products (Art. 14.3), this does not create any new international obligation for India. This is because India is a party to the 1996 WTO Information Technology Agreement, which appears to already mandate this. The TISA contains a provision (Draft Art. 10 of the E-commerce Annex) that mirrors the TPP standard.
The TPP also requires, in Art. 14.4, that parties practice non-discriminatory treatment of digital products originating from any other party. Draft Art. 1.6 of the TISA’s E-commerce Annex contains a non-discrimination provision, but the crucial difference between the two obligations is that while the TISA appears to provide flexibility for government procurement, the TPP limits exceptions to Art. 14.4 to subsidies, grants and other government-sponsored benefits. To the best of the author’s knowledge, India has no explicit non-discrimination legislation, but such a law would be redundant since India would be compliant with Art. 14.4 by virtue of refraining from treating foreign digital products discriminatorily. However, government policies such as the preferential adoption of the Bharat Operating System Solutions (BOSS) Linux distro may fall foul of the TPP standard, given that mere adoption would not amount to a grant or subsidy.
Article 14.8 of the TPP and Art. 4 of the TISA draft both mandate the protection of personal information online. While it can be argued that these legal standards can be independently imposed on India through instruments such as the International Covenant on Civil and Political Rights, these will represent the first time India accepts a concrete international obligation to actualise the right to privacy over the internet.
Article 14.11 of the TPP mandates the free flow of information across national borders. This provision may have huge implications for businesses that base their revenue generation on big data and targeted advertising, and the thrust of Art. 14.11.2 is that parties must allow businesses to transfer information (including personal information) across borders through electronic means. However, Art. 14.11 is subject to two significant carve-outs. First, Art. 14.11.2 is only applicable to ‘covered persons’, a term that has been defined to exclude financial service providers. Second, Art. 14.11.3 permits derogation from the free flow obligation to achieve “legitimate public policy objectives”, provided that measures derogating from the obligation are non-discriminatory and least-restrictive in nature. The TISA draft, in Art. 2.1, contains a provision similar to Art. 14.11 of the TPP. However, it is unclear whether the TISA definition of ’service supplier’ mirrors the TPP definition of ’covered person’. If it does, then financial service providers will be similarly excluded from the TISA obligation. Otherwise, the TISA obligation would be wider than the one in Art. 14.11 of the TPP.
In the latest TISA draft, Hong Kong has proposed a lengthy prefix to Art. 2.1 of the E-commerce Annex which exempts all measures taken by parties to protect the privacy of individuals from the cross-border data flow obligation. Indian compliance with the TPP obligation is doubtful, and would turn on an interpretation of the words “legitimate public policy objective” in Art. 14.11.3. This is because under Rule 7 of the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules 2011, cross-border flow of personal information is permitted only in situations where the recipient of the information complies with Indian data protection standards as a bare minimum. Similarly, India would almost certainly be non-compliant with the TISA draft unless the Hong Kong proposal is incorporated into the final text of the agreement.
Article 14.13 of the TPP contains a prohibition on parties against conditioning the conduct of business in their territory on the localisation of computing facilities such as servers and storage devices. This requirement is conditioned by the same carve-outs as Art. 14.11, with Art. 14.13.3 being identical to Art. 14.11.3. In addition, Art. 14.13 also applies only to ’covered persons’ and excludes financial institutions from its protection. The corresponding TISA draft article, Art. 8, contains a similar obligation. However, the TISA obligation may end up becoming significantly broader than the TPP provision, since only Colombia currently opposes a draft that expands the prohibition on server localisation to investment in the country’s territory by a covered business. Further, the difference between the TPP’s ‘covered person’ and the TISA’s ‘service supplier’ may mean that the latter is nevertheless broader than the former. There is no Indian measure that contravenes the prohibition on server localisation.
One of the more troubling features of the RCEP is Draft Art. 1.7.6, which contains a commitment to accede to or ratify a number of TRIPS+ agreements, such as the WIPO Copyright Treaty (WCT), the WIPO Performances and Phonograms Treaty, the Beijing Treaty on Audiovisual Performances and the International Convention for the Protection of New Varieties of Plants.
On trademark protection, the TPP’s Art. 18.18 specifically provides for non-visual marks such as sounds and scents to be given protection by parties. Draft Art. 3.1 of the RCEP contains a similar provision, but the portion dealing with scent marks has been met with significant opposition. India’s position on scent marks is unclear, but a reading of Section 2(1)(zb) of the Trade Marks Act 1999, alongside Rule 30 of the Trade Marks Rules 2002, seems to suggest that scent marks would not be permitted in the country. Sound marks have been granted in the past by the Indian registry.
Art. 18.20 of the TPP suggests that the exclusive protection granted to the owner of a registered trademark shall extend even against the rights of prior users of the mark. This would fly in the face of the statutory limitation in Section 34 of the Trade Marks Act 1999, which provides that owners of registered trademarks shall have no remedy against prior users. Draft Art. 3.6 of the RCEP unequivocally protects prior rights.
Art. 18.37 of the TPP has an expansive definition of patentable subject matter, which would have significant implications for Indian law. These include software patents, evergreening, patents on microorganisms, etc. Draft Art. 5.1 of the RCEP contains a similarly broad definition of patentable subject matter, but recent statements by Nirmala Sitharaman, Minister of State for Commerce & Industry, seem to indicate that Draft Art. 5.1 has been modified to address India’s concerns regarding evergreening.
Another worrying feature of the TPP is the provision of patent term adjustments for “unreasonable” processing delays at the patent office (Art. 18.46) and more specifically, for the time taken to obtain marketing approval for a new drug (Art. 18.48). Draft Art. 5.13 contains a provision for patent term restoration to compensate patentees for the non-working of patents pending marketing approval, but the latest draft records significant opposition to this clause. Draft Art. 5.13.3 also has a more general patent term restoration obligation for unreasonable delays at the patent prosecution stage attributable to the patent office.
The TPP provides for an aggressive data exclusivity regime, with 10 years for agrochemicals (Art. 18.47), five for new drugs (Art. 18.50) and eight for biologics (Art. 18.51). The RCEP, in Draft Art. 5.16, provides for five years of data exclusivity for new drugs. The TPP also contains a patent linkage regime, in Art. 18.53.
Art. 18.63 of the TPP mandates a ‘life plus 70’ copyright term, while the RCEP leaves this untouched.
Art. 18.68 of the TPP and Draft Art. 2.3 of the RCEP require parties to legislate anti-circumvention measures such as Digital Rights Management and Rights Management Information into place. While India has done this through the insertion of Sections 65A and 65B through the 2012 amendment to the Copyright Act 1957, it had no international obligation to do so since it is not a party to the WCT. This would, therefore, still amount to the acceptance of a new TRIPS+ international standard. In addition, Section 65A contemplates mens rea on the part of the person attempting to circumvent TPMs, and for this reason, Indian law could still fall short of complying with the TPP and RCEP obligations.
Art. 18.72 of the TPP requires parties, in judicial proceedings for infringement, to presume the validity of the patent, trademark or copyright. This could pose significant problems for India, especially in patent litigation, since Indian jurisprudence advocates a relatively cautious approach to patent validity at the prima facie stage of infringement proceedings. In Bilcare v. Supreme Industries, the Delhi High Court has interpreted Section 13(4) of the Patents Act 1970 to arrive at the conclusion that young patents could not be presumed to be valid, and that interim injunctions could be denied merely on the strength of a challenge to the patent’s validity in court.
Finally, Art. 18.78 of the TPP also requires parties to enact criminal penalties for the misappropriation of trade secrets, a provision that has come under criticism for its lack of protection to whistle-blowers. In addition, given that Indian law does not provide for statutory protection of trade secrets, it would be unable to comply with this requirement as it stands today.
From this analysis, it is clear that the TPP, RCEP and the TISA will entail some amount of IP norm-setting that shifts the balance established in the TRIPS towards rights-holders. In addition, it is safe to say that the provisions of the TPP and the TISA will entail the loss of sovereign control over the internet, a tradeoff balanced by the lowering of entry barriers for Indian digital players to the world market. However, it bears noting that most provisions in the TPP are subject to a ‘ratchet clause’, meaning that the lowering of barriers is necessarily a one-way process. Within the IP space, it is surprising to note that in some instances, the RCEP draft imposes stricter obligations than the TPP. Less surprising is the fact that the TISA, in several cases, imposes stricter controls than the TPP on e-commerce and the free flow of information. This is because the TPP can be seen as a sort of lowest common denominator among the FTAs surveyed, while the TISA, which has the EU and the US as parties, understandably caters to developed country interests at a larger scale.
(This piece was earlier published on 16 August 2016 in ORF’s Digital Frontiers)
 Art. 14.3
 Draft Art. 10 of E-commerce Annex
 WTO Information Technology Agreement
 Art. 14.4
 Draft Art. 1.6 of E-commerce Annex
 Art. 14.8
 Draft Art. 4 of E-commerce Annex
 Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules 2011
 Art. 14.11
 Draft Art. 2.1 of E-commerce Annex
 R. 7, Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules 2011
 Art. 14.13
 Draft Art. 8
 Draft Art. 1.7.6
 Art. 18.18
 Draft Art. 3.1
 Section 2(1)(zb) of the Trade Marks Act, 1999, read with R. 25, 28 and 30 of the TM Rules, 2002
 Art. 18.20
 Draft Art. 3.6
 Section 34 of the TM Act, 1999
 Art. 18.37
 Draft Art. 5.1
 Section 3(d) and 3(k) of the Patents Act, 1970.
 Art. 18.46
 Art. 18.48
 Draft Art. 5.13
 Art. 18.47
 Art. 18.50
 Draft Art. 5.16
 Art. 18.51
 Art. 18.53
 Art. 18.63
 Ss. 22 and 23 of the Copyright Act, 1957
 Art. 18.68
 Draft Art. 2.3 (Alt 1 is IN proposal)
 Section 65A of the Copyright Act, 1957
 Art. 18.72
Bilcare v. Supreme Industries, 2007 (34) PTC 444 Del.
 Art. 18.78
By Balaji Subramanian on 29 August, 2016
All these definitions highlight the importance of cyber systems, all the more so because the health and smooth operation of other critical infrastructures depend on them.
The World Economic Forum (WEF), in its 2016 report, depicted the breakdown of CII, data fraud/theft and cyberattacks as major linkages in the Global Risks Interconnections Map of 2016. (The WEF defines a global risk as an uncertain event or condition that, if it occurs, can cause significant negative impact on several countries or industries within the next 10 years.)
The evolving risk landscape 2007-2016, as projected by the WEF12, is as under:
[Table: WEF's evolving risk landscape, 2007-2016, listing the top global risks for each year. Recurring entries include breakdown of CII, asset price collapse, oil price shocks, chronic disease in the developed world, a slowing Chinese economy, global governance gaps, severe income disparity, chronic fiscal imbalances, rising greenhouse gas emissions, water supply crises, extreme weather events, failure of national governance, interstate conflict with regional consequences, failure of climate change mitigation and adaptation, large-scale involuntary migration, and major natural catastrophes. The year-by-year layout of the original table is not reproduced here.]
Cyberattacks on information infrastructure and data fraud or theft will only increase in the near future as the Internet of Things becomes the norm. Failure to comprehend the risks inherent in networks of information systems may lead to a breakdown of the CII, with far-reaching, adverse consequences for a nation's economy, defence and other critical services. A fundamental requirement for any nation, therefore, is to institute proper safeguards to protect CII from falling prey to an adversarial nation, terrorist organisations, hackers, or lone wolves in the form of insiders.
Increased interdependency and total reliance on often vulnerable information systems have made CII prone to attack. The threat landscape is depicted below:
Natural hazards such as floods, earthquakes, cyclones and tsunamis can damage information systems, crippling dependent services like telecommunications, the electricity grid, water supply and the internet. History is replete with examples such as the following:
(a) The Fukushima Accident: On 11 March 2011, following a major earthquake, a 15-metre tsunami disabled the power supply and cooling of three Fukushima Daiichi reactors in Japan, causing a nuclear accident. All three cores largely melted in the first three days. There have been no deaths or cases of radiation sickness from the nuclear accident, but over 100,000 people were evacuated from their homes. Official figures show well over 1,000 deaths attributable to the delayed return of evacuees, in contrast to the little radiation risk an early return would have carried.13
(b) Floods in Kashmir: All state-owned and private telecommunication networks were affected in the 2014 floods and there was no means of communication between various agencies. The only lifeline available was military communications, which was effectively used in coordinating relief activities.
It is clear that natural calamities can play havoc with national CII, and it is important to factor them in while designing or planning its protection.
Worldwide, CIIs remain under man-made threats, some of them explained below:
(a) Action by an Adversary Nation-State: The resources required for a cyber attack on an adversary's CII, viz., detailed intelligence and technical expertise, are generally available only at the national level. Such an attack can be isolated to the cyber realm or carried out in conjunction with kinetic operations. Cyber attacks by Russia on Georgia in 2008 (distributed denial of service, logic bombs) were followed by kinetic operations.15 A classic example of an attack on CII is Stuxnet, malware used to damage the centrifuges at an Iranian nuclear facility.16 Another case is the shutdown of part of the Ukrainian electric grid on 23 December 2015. On that day, Kyivoblenergo, a regional electricity distribution company in Ukraine, reported service outages to customers caused by a third party's illegal entry into the company's computer and supervisory control and data acquisition (SCADA) systems. Seven 110 kV and 23 35 kV substations were hit, leaving about 225,000 customers without power for three hours. Ukrainian news media reported that a foreign attacker remotely controlled the SCADA distribution management system to cause the outage.17
Cyber security researcher Jeffrey Carr has claimed that India's INSAT 4B satellite was taken down in 2010 by Stuxnet to serve Chinese business interests. On 7 July 2010, a power glitch in the satellite forced India's leading DTH providers, such as Sun Direct, Doordarshan and Tata Teleservices, to shift to ASIASAT-5, a satellite owned by the Chinese government. INSAT 4B was using the same Siemens software responsible for activating Stuxnet to disable the Iranian nuclear centrifuges.18
(b) Terrorist Organisations:
In 1998, ethnic Tamil guerrillas attempted to disrupt operations of Sri Lankan embassies by sending large volumes of e-mail. The embassies received 800 e-mails a day over a two-week period.19
(c) Embedded Systems:
The information infrastructure and the various networks and systems of government and the private sector extensively leverage the latest technology and commercial electronic components (hardware, software and firmware) procured from global sources. Global commercial procurement may offer state-of-the-art technology at a competitive price, but it can also compromise the supply chain through adversary action, in the form of intentional tampering during development, delivery of counterfeits, or insertion of malicious software during maintenance, making it easy to manipulate, control or even paralyse another organisation's systems and data.
In 1982, US President Ronald Reagan approved a CIA plan to transfer to the Soviet Union software used to run pipeline pumps, turbines and valves. The software, subsequently stolen by Russians in Canada, had embedded features (a logic bomb) designed to cause pump speeds and valve settings to malfunction. "The result was the most monumental non-nuclear explosion and fire ever seen from space," noted Thomas C. Reed, former US Air Force Secretary and former Director of the National Reconnaissance Office, in his book At the Abyss: An Insider's History of the Cold War.20
(d) Hackers or Lone Wolves:
The possibility of disruptive attacks by individuals cannot be ruled out. The motive could be anything, viz., testing one's own hacking capability, showing solidarity with a particular organisation, or no reason at all. These persons are akin to misguided missiles, and the probability of such lone-wolf attacks exists because the entry cost is almost negligible. Evgeny Morozov, in 'An Army of Ones and Zeroes: How I Became a Soldier in the Georgia-Russia Cyberwar', mentions that he had a much simpler research objective for carrying out a cyber attack: to test how much damage someone quite aloof from the Kremlin, physically and politically, could inflict upon Georgia's web infrastructure acting entirely on his own, using only a laptop and an internet connection. If successful, Morozov thought he could show that the field is open to anyone wishing to launch a cyber attack against Georgia. That is exactly what happened: without any technical knowledge, and after just browsing the net for approximately half an hour, he had his e-Molotov cocktail ready to take on Georgia's information systems.21
(e) Insider Threat:
WikiLeaks is an example of the havoc that an insider can cause. If an individual employee of an organisation is compromised, much damage can be done to it. The insider threat can take the form of a disgruntled employee, a compromised worker, or even the unintentional hiring of a cyber terrorist or hacker.
(f) Lack of Training:
The skills and knowledge of staff play an important role in preventing major losses to an organisation. If a person is inadequately trained, the chances of CII disruption or damage through accident or mistake are higher.
The Government of India has designated the National Critical Information Infrastructure Protection Centre (NCIIPC) of the National Technical Research Organisation (NTRO) as the nodal agency under Section 70A(1) of the Information Technology (Amendment) Act 2008 for taking all measures, including associated research and development, for the protection of CII in India. The creation of the NCIIPC is a welcome move, but making a technical intelligence agency responsible for the task is fraught with inherent limitations. Why a technical intelligence agency has been made responsible defies logic, other than that cyberspace was within the charter of the NTRO. An intelligence agency has its own work culture and limitations, which prevent it from functioning in an open manner and interacting on a joint platform with private enterprises. How the NCIIPC intends to engage civil and private organisations and major operators of information infrastructure is not yet known. The NCIIPC framework has to provide for coordination between industry and government. CII protection should not be left solely to the NCIIPC; rather, it should follow a holistic, multi-stakeholder approach, which requires a separate architecture with adequate wherewithal.
Networks of information systems are complex, and their protection demands action from all stakeholders, viz., industry, government, technology experts, researchers and academia. The recommended measures are:
(a) At Critical Information Infrastructure
(i) Physical Security:
(a) If feasible, the site of the infrastructure should be chosen so as to minimise damage in case of natural hazards
(b) Have layered physical security to incorporate well-trained guards, electronic surveillance, alarms, electronic locks, etc.
(c) Access control of sensitive areas in the form of biometric measures
(ii) E-Security:
(a) Securing electronic systems against unauthorised access by means of encryption and available tools such as intrusion detection systems, firewalls, etc.
(b) Taking measures to protect e-transmissions
(c) Having a strong password policy
(d) Adequate well-rehearsed data disaster management drills
(iii) Human Capital:
(a) Training the staff in e-security
(b) Doing a background check on employees handling sensitive appointment or data
(b) National Level:
(i) Formulating a national strategy on CII protection
(ii) Identification of CII
(iii) Perform risk analysis based on various threat scenarios
(iv) Making available standard operating procedures
(v) Doing penetration testing of CII systems and analysing the weaknesses
(vi) Issuing advisories on implementing security programmes
(vii) Formulation of standards in consultation with industry
(viii) Best practices
(ix) Coordination aspects related to CII among various agencies, including international
(x) Guidance on equipment hardening and testing of critical components
(xi) Public private partnership
(xii) Information-sharing on types of cyber attacks, which is critical to cyber security
(c) International Level: The following issues need deliberation:
(i) Sharing of research and security initiatives
(ii) Legal issues
(iii) Development of security apparatus
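To make one of the e-security measures above concrete, the sketch below shows how a strong password policy might be enforced in code. This is a minimal illustration; the specific rules (minimum length of 12 and four character classes) are assumptions for the example, not a mandated standard.

```python
# Minimal sketch of enforcing a "strong password policy" as recommended
# above. The thresholds and required character classes are illustrative
# assumptions, not taken from any particular standard.
import re

def is_strong(password: str) -> bool:
    """Length >= 12, with upper, lower, digit and symbol classes present."""
    return (
        len(password) >= 12
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(is_strong("correct horse battery staple!1A"))  # True
print(is_strong("password123"))                      # False
```

In practice such a check would sit alongside rate-limiting and breached-password screening; the point here is only that the policy becomes testable once written as code.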
No matter how much one tries, vulnerabilities in CII will remain, especially when the hardware, firmware and software are produced globally. About 72 percent of Indian companies faced attacks in 2015.22 With rapidly growing interconnected, interdependent operations and digitisation (with Digital India as the flagship programme of the government), cyber security challenges will only increase with time. Any damage to CII will have a direct impact on national security, the economy and civil society. To safeguard and secure CII, all the stakeholders have to work together to evolve innovative security solutions.
By on 1 August, 2016
Non-registration due to lack of confidence in LEAs’ own capabilities
Both these factors highlight the need for capacity-building of LEAs.
The need for skills to work in cyberspace is being realised for all forms of crime, as most crimes now have some element of cyberspace: even in a case of suicide, digital media has to be checked for a suicide note or other relevant information. The training requirements have therefore increased.
Large class sizes, the different learning styles of users, refresher courses with outdated and non-operational content, and the lack of immersive practice and post-training assessment stand as drawbacks of such rigid, pedagogic environments. By presenting low-risk and realistic interactions, immersive training technology brings learning out of the classroom. Such adaptive training environments offer hands-on experiences that enrich the analytical and technical skills of investigators and their situational (or risk) awareness of cybercrimes.
Traditional modes of training through books, boards and PowerPoint/PDF-based approaches are not very suitable for training to combat cybercrime. There is a need for more practical training based on simulated environments. However, given the volume of trainees, the proposed methodology should be scalable.
The challenges of cybercrime trainings can be summarised as:
Traditional PowerPoint/ PDF-based approach not very suitable
Number of officers to be trained (volume)
Inaccurate assessments of needs of LEAs
The expanding ubiquity, frequency and severity of cybercrimes require LEAs to think beyond a one-size-fits-all training strategy. In devising new counter-responses, continual advancement in knowledge and skills related to cybercrime is a core imperative.
Capacity-building for LEAs must be seen in the context of boosting the capabilities in these functional areas:
To detect cybercrimes
To receive complaints about cybercrimes
To be a first responder to the complaints about cybercrimes
To register criminal complaints about cybercrimes, with all details
To investigate cybercrime cases
To do forensic as well as data analytics related to cybercrime cases
To collect admissible evidence and launch prosecution in cybercrime cases
To prepare and launch public awareness campaigns to prevent cybercrimes
To work with researchers, academia and private sector to improve cyberspace security
To liaise with international LEAs and service providers
Further, efforts have to be made to equip them with:
Adequate staff with appropriate skillsets
Infrastructure for cybercrime investigation unit
Infrastructure for cyber forensic units (to aid investigation, which would be in addition to the forensic labs to give expert opinion for evidentiary purposes)
Appropriate standard operating procedures (SOPs)
A sound legal framework
Tie-ups with other stakeholders
Mechanisms for international cooperation and coordination
Cybercrimes introduce unanticipated risks and effects, creating greater urgency to equip investigators with new skillsets. One such area is the establishment of a cloud computing training platform that is networked and nodal in nature, parallel to the architecture of cyber security itself.
This platform can be pivotal in increasing the shared knowledge and skills of investigators and in connecting LEAs and stakeholders. This cloud-based training system could encompass the functions depicted in the diagram:
A centralised learning platform gives users from different professions a common view of the objectives in addressing cybercrimes. Standardised courses enable frontline officers, prosecutors and data analysts with varying levels of cyber knowledge to acquire a consistent overview of investigative tasks such as digital evidence handling, intelligence development and legal procedures.
Greater knowledge about how their roles contribute to investigations could lead to increased productivity and efficiency
Collaborative processes among investigators could be more streamlined and integrated at a global scale through this platform
The curriculum needs to be standardised, keeping in mind the different roles within LEAs and the skill sets required for each role. A tentative list of roles would include:
First responder officer
Cybercrime intelligence analyst
Digital forensics specialist
Head of unit: Investigation/forensics
Senior LEA manager
This is the key component of the model. Besides traditional modes of training through books, boards and PowerPoint/PDF-based approaches, there is a strong need for more training based on simulated environments. This would mean the creation of scenarios, including digital exhibits (logs, etc.), for extraction by trainees using forensic tools preloaded on the infrastructure, following appropriate procedures.
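As a sketch of what building such a scenario could look like in practice, the following generates a synthetic digital exhibit: an authentication log with a planted indicator (repeated failed logins from one address) for trainees to find with their tools. The log format, file name and planted IP are illustrative assumptions, not any real system's output.

```python
# Sketch: generate a synthetic auth log as a training exhibit, with one
# planted artefact (repeated FAIL lines from 203.0.113.77) for trainees
# to identify. All names, formats and addresses are invented.
import random
from datetime import datetime, timedelta

random.seed(42)  # reproducible exhibit
start = datetime(2016, 7, 1, 9, 0, 0)
lines = []

# Benign background traffic: successful logins from internal addresses.
for i in range(50):
    ts = start + timedelta(seconds=random.randint(0, 3600))
    ip = f"10.0.0.{random.randint(2, 250)}"
    lines.append(f"{ts:%Y-%m-%d %H:%M:%S} LOGIN ok user{i % 7} from {ip}")

# Planted indicator: repeated failures against the admin account.
for _ in range(5):
    ts = start + timedelta(seconds=random.randint(0, 3600))
    lines.append(f"{ts:%Y-%m-%d %H:%M:%S} LOGIN FAIL admin from 203.0.113.77")

random.shuffle(lines)  # trainees must sort and filter to spot the pattern
with open("exercise_auth.log", "w") as f:
    f.write("\n".join(lines))
```

Scenario authors would then pair the exhibit with an answer key (here, the five FAIL lines) so that trainee findings can be assessed against ground truth.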
In cyberspace, criminals adopt new modus operandi every day, so simulation-based training methodology has to stay contemporary. To develop new scenarios, it is important to keep abreast of new modus operandi and technology trends. This part would include:
Knowledge exchange on current and emerging methods of operations (or modus operandi) of cybercriminals
Within this platform, training courses could stress-test the computing skills of cybercrime experts to analyse and discern signals collected from hacker forums, internet relay chat rooms and messaging texts
Attacks like phishing and tampering, advanced persistent threats, attacks on backend systems, and reverse-engineering could be simulated.
Combating cybercrime could take more than technical skills and require cross-disciplinary knowledge. Researchers must look at the best practices to stay ahead of hackers by understanding indicators of malware victimisation, the ecology of trust and motivation among hackers, online hacker communication and interaction styles
Gaining practice in such knowledge exchanges could shed light on how hacker communities interact and share information, creating actionable intelligence for cybercrime investigations
Able to develop the best science to help advance cyber security training and research
Feedback gathered from learner usage and experience must be utilised to design new knowledge capacity and material
The modules should be developed by subject-matter-experts, ensuring quality content is constantly updated
Training courses should be more reflective of real-world cases and incidents
Maintain engagement with users by tapping into learners’ interests, offering appropriate challenges and increasing motivation
This platform could allow performance-based certification to demonstrate that users know what to look for and what actions to take during a cyber incident
Assess if knowledge or skills have been practically transferred
Automated scoring and self-assessments in different areas of cybercrime
Provide critical insights into the effectiveness of training platform
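The automated scoring and self-assessment idea above can be sketched minimally as follows; the question areas and answer keys are hypothetical, and a real platform would add weighting, timing and proctoring on top of this core.

```python
# Minimal sketch of automated scoring for a cybercrime training assessment:
# compare a trainee's answers against an answer key and report a score per
# area. The areas, question IDs and answers are invented for illustration.
def score(answer_key: dict, answers: dict) -> dict:
    """Return the fraction of correct answers per area."""
    totals, correct = {}, {}
    for (area, q), right in answer_key.items():
        totals[area] = totals.get(area, 0) + 1
        if answers.get((area, q)) == right:
            correct[area] = correct.get(area, 0) + 1
    return {a: correct.get(a, 0) / n for a, n in totals.items()}

key = {("forensics", 1): "b", ("forensics", 2): "d", ("legal", 1): "a"}
trainee = {("forensics", 1): "b", ("forensics", 2): "a", ("legal", 1): "a"}
print(score(key, trainee))  # {'forensics': 0.5, 'legal': 1.0}
```

Per-area scores like these are what would feed the platform's self-assessment dashboards and certification thresholds.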
This platform will probe how internet-enabled technologies and wearables impact cybersecurity, policing and how crime could be conceived
A horizontal approach should involve cyber experts and technology innovators from LEAs in different countries sharing their cyber investigative products and threat assessments
Police agencies should perform SWOT analysis of their cyber capabilities and identify the next steps for improvement, providing insights into the different needs and stages of cyber capacity development for individual countries
Vertically, the expanse of future internet-enabled crimes could be analysed at national, regional, and international levels
This platform will allow new relationships with other nodes within the networks of the cybersecurity architecture
Effective collaboration and greater harmonisation provide a more accurate and comprehensive assessment of cyber criminality, ensuring responses are coordinated, effective and timely
Law enforcement should collaborate with the private sector to explore and design complex simulations of future communications technologies that are prone to criminal exploitation, improve cyber security skills at all levels, and work with associated professions to make industry more resilient to cybercrime.
Disclaimer: The views in the paper are personal and do not reflect the views of INTERPOL or the Government of India.
By on 21 July, 2016
Described as stealthy, technically complex, tenacious, well-financed, and motivated by profit or strategic advantage, the spectrum of cybercrime defies every boundary. The cop-and-thief technological arms race remains an enduring paradox of this digital terrain. Criminals across the globe unfailingly strive to counteract innovations in cyber bulwarks for hardware and software.
The information infrastructure is increasingly under attack by cyber criminals. The number, cost, and sophistication of attacks are increasing at exponential rates. Most of these attacks are transnational by design, with victims spread throughout the world, necessitating multi-jurisdictional or trans-national investigations.
PCA helped to reveal the internal structure of the data in the way that best explained its variance. Subsequently, using the dimensions uncovered by PCA, linear regression was used to estimate the strength of each identified dimension's contribution to 'Perceived Impact'.
The PCA yielded three components that explained 40.89 percent, 36.53 percent and 8.71 percent of the variation. Based on the underlying semantics of the attributes that loaded onto each of these dimensions, they were labelled as follows:
Enhanced Scope of Work
Empowerment
Transactional Efficacy
Enhanced Scope of Work: This component explained the highest level of variance (40.89 percent). Most elements from 'Economic Capital' mapped onto this component, which reflects growth in business or support for professional growth. The attributes that loaded onto it relate to skill enhancement and the selling of new products, increase in business, new opportunities, geographical reach, reduction in travel time, availability of new information, intensified competition, efficiency, reduction in waiting time, lower expenses, professional contacts and searching for new topics.
Empowerment: This component explained the second-highest level of variance (36.53 percent). Most elements from 'Social Capital' and 'Knowledge Capital' mapped onto this component. Elements in this component reflect the ability to manage rural vulnerabilities, which relate not just to the poor information availability characteristic of rural areas but also to the lack of physical infrastructure and poor earning opportunities. The use of the Internet for managing vulnerabilities has not been considered in previous studies. This study groups the elements that mapped onto this component into four categories that characterise the vulnerabilities:
Informational: This arises due to being able to get current information on the Internet. This has to be seen in the rural context where respondents have challenges in accessing information. Nearly 60 percent of the respondents had stated that they perceive that the information that they get is late or not current and Internet use helps them to overcome this barrier.
Linkage: This is measured by attributes such as ease of staying in touch, and ability to maintain near and distant social ties. This ability must be viewed in the rural context where organising face-to-face meetings can be a huge challenge due to poor road infrastructure and transport availability.
Institutional: This is related to the ability to contact people during emergencies, improving current ability to earn and managing hardships associated with physical travel related to work (in rural areas, infrastructure and services related to travel are poor).
Knowledge Creation and Cognition: This relates to facilitating users to understand a subject matter better, the use of videos for the same, sharing knowledge with other similarly interested people, higher preparedness and confidence with respect to the work environment, and ability to better understand the linkages amongst different topics. This was captured through assessing impact by viewing videos for learning and understanding subjects, getting a chance to talk to people interested in the same topics, understanding linkages among related topics, exchanging ideas about work, help in being more confident, in expectation of work/job requirement. These aspects highlight the vulnerability of accessing knowledge in a rural area, where there is poor access to knowledge resources like schools, libraries, and experts. This study shows that knowledge creation and cognition loads on the dimension that has many variables from the social dimension.
Transactional Efficacy: This is the third dimension, explaining 8.71 percent of variance. The two attributes related to it are: a) the extent of online transactions; and b) getting feedback on business- or work-related issues.
To uncover how the three latent dimensions identified through PCA contribute to ‘Perceived Impact’, we ran a multiple regression, using the principal components identified earlier as the independent variables and the ‘Perceived Impact’ as the dependent variable.
Of the three factors, only 'Empowerment' and 'Enhanced Scope of Work' were significant, with p-values < 0.001 and 0.004 respectively. The path loading of 'Transactional Efficacy', despite being positive, was insignificant, with a p-value of 0.117.
The R² value for the model was 0.907, showing that a high degree of variation is explained by the model.
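The PCA-then-regression pipeline described above can be sketched with scikit-learn on synthetic data. The survey attributes and responses are not reproduced in this paper, so the inputs below are placeholders; only the shape of the analysis (extract components, then regress the outcome on them) mirrors the study.

```python
# Sketch of the PCA-then-regression analysis described above, on synthetic
# placeholder data standing in for the (unpublished) survey responses.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))             # hypothetical survey attributes
y = 0.8 * X[:, 0] + rng.normal(size=200)   # hypothetical 'Perceived Impact'

# Step 1: PCA uncovers latent dimensions and their share of variance.
pca = PCA(n_components=3)
components = pca.fit_transform(X)
print(pca.explained_variance_ratio_)

# Step 2: regress 'Perceived Impact' on the components to gauge the
# strength of each dimension's contribution.
reg = LinearRegression().fit(components, y)
print(reg.coef_)                           # one loading per dimension
print(reg.score(components, y))            # R^2 of the model
```

In the study itself the significance of each loading would additionally be tested (the reported p-values), which scikit-learn does not compute directly; a statistics package would supply those.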
a) The effect of ‘Empowerment’ on ‘Perceived Impact’ is significant and positive. For the rural user, the highest impact of the Internet is through ‘Empowerment’.
b) The effect of 'Enhancement of Scope of Work' on 'Perceived Impact' is significant and negative. The negative sign is counter-intuitive, but it can be explained through the theory of satisfaction formation. This study uses Disconfirmation Theory, which stipulates that satisfaction from Internet use is mainly determined by the gap between cognitive standards, desires or expectations, and perceived performance. Negative disconfirmation arises when perceived performance, especially for Internet-based services, falls below expectations or desires. In the context of this study, individuals who used the Internet possibly had high levels of desire and expectation on the dimension of 'Enhancement of Work Scope', and the outcomes on this dimension were lower than those desires and expectations, leading to a negative perception. This gap could be due to the novelty factor and the changing scope of features and services available on the Internet, which create dynamic determinants of satisfaction; such changes could leave users with low self-efficacy and higher negative disconfirmation. An alternative explanation is that individuals did not get enough support for enhancing the scope of their profession, as there may not be enough relevant content for individuals in rural areas. In addition, the lack of content in local languages, the poor presence of local websites, inadequate quality of Internet connectivity and meagre Internet penetration lead to low levels of perceived performance. High expectations and desires could thus be driving the negative disconfirmation and the negative sign on this dimension, even though 'Enhancement of Scope of Work' remains significant in terms of its 'Perceived Impact'.
c) The effect of ‘Transaction Efficacy’ on ‘Perceived Impact’ is insignificant. This could be due to the low levels of transactions among the survey respondents: Internet services in the survey area had become available only a few months earlier and may not have had high levels of service quality in the initial phases. Studies of Internet adoption indicate that users initially use the Internet for social purposes; only when they feel comfortable with its various uses and see the benefits of online transactions do they graduate to them. Online transactions for e-commerce are a relatively new phenomenon in India, and many individuals in rural areas may be unable to participate for want of Internet banking, delivery of services to rural areas, and trust in online transactions.
This study developed a model for identifying the constructs that influence the ‘Perceived Impact’ of the Internet on rural users in India. The two constructs that influence ‘Perceived Impact’ are ‘Empowerment’ and ‘Enhancement of Scope of Work’. While highlighting the role of Social, Economic and Knowledge capital, Internet users in rural India emphasised the effect of ‘Empowerment’ on ‘Perceived Impact’. ‘Empowerment’ embeds the social, knowledge creation and cognition aspects, and highlights the role of Internet use in managing the vulnerabilities of a rural context. The model used in this study showed that knowledge creation and cognition on the Internet are perceptually recognised as having a social dimension.
The specific role of the Internet in overcoming vulnerabilities was in terms of bridging the informational, linkage, institutional, and knowledge creation and cognition gaps in rural areas. This aspect had not been considered in previous studies.
Disconfirmation Theory, which relates satisfaction with Internet services to the gap between desires and cognitive expectations on the one hand and the perceived performance of those services at current levels of Internet penetration and adoption on the other, helped to explain the negative effect of ‘Enhancement of Scope of Work’ on ‘Perceived Impact’.
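The disconfirmation mechanism described above can be illustrated with a minimal numerical sketch (the scores and scale are hypothetical, not from the study's data): satisfaction is driven by the difference between perceived performance and prior expectation, so high expectations combined with modest outcomes yield negative disconfirmation.

```python
# Minimal sketch of Disconfirmation Theory as used in the study.
# All numbers below are illustrative; the study measured these constructs
# through survey instruments, not a simple arithmetic score.

def disconfirmation(perceived_performance: float, expectation: float) -> float:
    """Positive: performance exceeded expectation (positive disconfirmation).
    Negative: performance fell short (negative disconfirmation)."""
    return perceived_performance - expectation

# A rural user with high expectations of work-scope enhancement (say 8/10)
# who experiences modest outcomes (say 5/10) is negatively disconfirmed:
gap = disconfirmation(5.0, 8.0)   # negative gap, hence negative perception
```

The same arithmetic explains why ‘Empowerment’ can carry a positive sign while ‘Enhancement of Scope of Work’ carries a negative one: the sign depends on which side of expectation the perceived performance falls.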
This study was done at an early stage of Internet deployment in India’s rural areas. At this stage adoption may not have been high and service quality may not have been adequate. These factors could influence the ‘Perceived Impact’. Although the model used for this study does not take into account the Quality of Service (QoS) explicitly, it is possible that users’ decision to adopt certain features of Internet services may depend on it. For example, poor QoS could lead individuals to not adopt online banking, as they may not be sure whether their transaction would go through given the poor quality of services.
A longitudinal study of how the different dimensions of ‘Perceived Impact’ change over time would provide rich data on the stages of the Internet’s ‘Perceived Impact’. This study focused only on Internet users; further work is needed to make the model applicable to a wider population.
Bandura, A. (1997). Self-Efficacy: The Exercise of Control, New York: Freeman.
Lin, N. (2001). Social capital: A theory of social structure and action, Cambridge, UK: Cambridge University Press.
Ranchi: The district is situated in one of India’s most backward states (Figure 2). As in most backward rural areas, many villages in Ranchi district had poor connectivity. Airjaldi has covered around 60 villages in five blocks near Ranchi (Ormanjhi, Kanke, Angara, Gola, Patratu) by providing them with low-cost wireless broadband Internet. The population of the five blocks is approximately 1,434,649.
Figure 2: Ranchi district map. Source: Mapsofindia.com, accessed on February 28, 2015
Guna: The second site was also in an economically backward part of India, at Guna in Madhya Pradesh (Figure 3). The population of Guna is 137,175. DEF has provided wireless broadband Internet in this area through innovative low-cost technology, largely on the periphery of the two small towns of Guna and Shivpuri and in six villages around them, away from the city.
Figure 3: District map of Guna. Source: Mapsofindia.com, accessed on February 28, 2015
This working paper is part of the project titled, “Evolving Policy for Spectrum Management through Impact Assessment of Wireless Technology and Broadband Connectivity in Rural India”. The research team would like to acknowledge the funding support provided by the Ford Foundation.
We would also like to acknowledge the research assistance provided by Ms Kavita Tatwadi, Ms Shivangi Mishra and Ms Sneha Jhala, Research Associates at IITCOE, and Mr Rishabh Dara, PhD Scholar at IIMA.
21 July, 2016
The purpose limitation principle, along with the data minimisation principle, which requires that no more data be processed than is necessary for the stated purpose, aims to limit the use of data to what the data subject has agreed to. These principles are in direct conflict with new technologies that rely on ubiquitous collection and indiscriminate uses of data. The main import of Big Data technologies lies in the inherent value of data, which can be harvested not through the primary purposes of data collection but through various secondary purposes involving repeated processing of the data.17 Further, instead of destroying the data once its purpose has been achieved, the intent is to retain as much data as possible for secondary uses. Importantly, as these secondary uses are inherently unanticipated, it becomes impossible to account for them at the stage of collection and to offer the data subject a meaningful choice.
Followers of the discourse on Big Data will be well aware of its potential impacts on privacy. De-identification techniques that protect the identities of individuals in a dataset face a threat from the growing amount of data available, publicly or otherwise, to a party seeking to reverse-engineer an anonymised dataset and re-identify individuals.18 Further, Big Data analytics promise to find patterns and connections that can add to the knowledge available to the public for decision-making. It is also likely to reveal insights about people that they would have preferred to keep private.19 In turn, as people become more aware of being constantly profiled through their actions, they will self-regulate and ‘discipline’ their behaviour, which can lead to a chilling effect.20 Meanwhile, Big Data is fuelling an industry that incentivises businesses to collect ever more data, as it has a high and growing monetary value. At the same time, Big Data promises a completely new kind of knowledge that could prove revolutionary in fields as diverse as medicine, disaster management, governance, agriculture, transport, service delivery, and decision-making.21 As long as there is a sufficiently large and diverse amount of data, invaluable insights may be locked in it, access to which could provide solutions to a number of problems. In this light, it is important to consider what kind of regulatory framework could best facilitate some of the promised benefits of Big Data while mitigating its potential harms. That the existing data protection principles have, by most accounts, run their course makes the examination of alternative frameworks even more important. This article examines below some proposals to alter the existing framework of purpose limitation.
Some scholars, like Fred Cate22 and Daniel Solove,23 have argued that the primary focus of data protection law needs to move from control at the stage of data collection to actual use cases. In his article on the failure of the Fair Information Practice Principles,24 Cate puts forth a proposal for ‘Consumer Privacy Protection Principles’. Cate envisions a more interventionist role for data protection authorities, regulating information flows when required in order to protect individuals from risky or harmful uses of information. His attempt is to extend to data protection the consumer protection law principles of prevention and remedy of harms.
In a re-examination of the OECD Privacy Principles, Cate and Viktor Mayer-Schönberger attempt to discard the restriction of the use of personal data to specified purposes only. They felt that such a restriction could significantly threaten various research and beneficial uses of Big Data. Instead of articulating a positive obligation about what collected personal data may be used for, they attempt to arrive at a negative obligation listing the use cases prevented by law. Their working definition of the use specification principle broadens the scope of permissible uses by preventing the use of data only “if the use is fraudulent, unlawful, deceptive or discriminatory; society has deemed the use inappropriate through a standard of unfairness; the use is likely to cause unjustified harm to the individual; or the use is over the well-founded objection of the individual, unless necessary to serve an over-riding public interest, or unless required by law.”25
While most standards in the above definition have an established understanding in jurisprudence, it is the concept of unjustifiable harm that interests us here. Any harms-based approach goes back to John Stuart Mill’s dictum that the only justifiable purpose for exerting power over the will of an individual is to prevent harm to others. Therefore, any regulation that seeks to control or curtail the autonomy of individuals (in this case, the ability of individuals to allow data collectors to use their personal data, and the ability of data collectors to do so, without any limitation) must clearly demonstrate the harm to the individuals in question.
Fred Cate articulates the following steps to identify tangible harm and respond to its presence:26
Cate’s proposal argues for what is called a ‘use-based system’, which is extremely popular with American scholars. Under this system, data collection itself is not subject to restrictions; only the use of data is regulated. The argument has great appeal both for businesses, which can reduce their overheads significantly if consent obligations are done away with so long as they use data in ways that are not harmful, and for critics of the current data protection framework’s reliance on informed consent. Lokke Moerel explains the philosophy of the ‘harms-based approach’ or ‘use-based system’ in the United States by juxtaposing it against the ‘rights-based approach’ in Europe.27 In Europe, the rights of individuals with regard to the processing of their personal data are a fundamental human right, and a precautionary principle is therefore followed, with much greater top-down control over data collection. In the United States, by contrast, there is far greater reliance on market mechanisms and self-regulating organisations to check inappropriate processing activities, and government intervention is limited to cases where a clear harm is demonstrable.28
Continuing research by the Centre for Information Policy Leadership under its Privacy Risk Framework Project looks at a system for articulating the harms and risks arising from the use of collected data. It has arrived at a matrix of threats and harms. Threats are categorised as: a) inappropriate use of personal information and b) personal information in the wrong hands. More importantly for our purposes, harms are divided into: a) tangible harms, which are physical or economic in nature (bodily harm, loss of liberty, damage to earning power and economic interests); b) intangible harms that can be demonstrated (chilling effects, reputational harm, detriment from surveillance, discrimination and intrusion into private life); and c) societal harms (damage to democratic institutions and loss of social trust).29 For any harms-based system, a matrix like this needs to emerge clearly so that regulation can focus on mitigating the practices leading to the harms.
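The threat/harm matrix described above is essentially structured data, and encoding it makes the categorisation concrete. The sketch below is purely illustrative (the identifiers and the lookup function are this paper's own, not part of the CIPL framework): it records the two threat categories and the three harm categories so that an observed practice could be mapped onto the harms it risks.

```python
# Illustrative encoding of the CIPL-style threat/harm matrix described in
# the text. Category names and structure follow the prose; the Python
# identifiers themselves are hypothetical.

THREATS = {
    "inappropriate_use",   # a) inappropriate use of personal information
    "wrong_hands",         # b) personal information in the wrong hands
}

HARMS = {
    "tangible": [          # physical or economic in nature
        "bodily harm", "loss of liberty",
        "damage to earning power", "damage to economic interests",
    ],
    "intangible": [        # demonstrable but non-physical
        "chilling effects", "reputational harm",
        "detriment from surveillance", "discrimination",
        "intrusion into private life",
    ],
    "societal": [
        "damage to democratic institutions", "loss of social trust",
    ],
}

def harms_for(category: str) -> list:
    """Return the harms listed under a category, or an empty list."""
    return HARMS.get(category, [])
```

A regulator applying a harms-based system would, in effect, traverse a matrix of this shape when deciding whether a given data practice warrants intervention.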
Lokke Moerel and Corien Prins, in their article “Privacy for Homo Digitalis – Proposal for a new regulatory framework for data protection in the light of Big Data and Internet of Things”,30 use the ideal of responsive regulation, which considers empirically observable practices and institutions when determining the regulation and enforcement required. They state that current data protection frameworks, which rely on mandating certain principles of how data must be processed, are exercised through merely procedural notification and consent requirements. Further, Moerel and Prins argue that data protection law cannot consider individual interests alone but must also take collective interests into account. The test must therefore be a broader assessment than the purpose limitation’s articulation of the interests of the parties directly involved: whether a legitimate interest is achieved.
Legitimate interest has been put forth as an alternative to the purpose limitation. It is not a new concept: it has been a part of the EU Data Protection Directive and also finds a place in the new General Data Protection Regulation. Article 7(f) of the EU Directive31 provided for legitimate interest, balanced against the interests or fundamental rights and freedoms of the data subject, as the last justifiable ground for the use of data. Due to confusion in its interpretation, the Article 29 Working Party looked into the role of legitimate interest in 201432 and arrived at the following factors for determining its presence: a) the status of the individual (employee, consumer, patient) and the controller (employer, company in a dominant position, healthcare service); b) the circumstances surrounding the data processing (the contractual relationship between data subject and processor); and c) the legitimate expectations of the individual.
Federico Ferretti has criticised the legitimate interest principle as vague and ambiguous. Balancing the legitimate interest in using the data against the fundamental rights and freedoms of the data subject gives data controllers some flexibility in determining whether data may be processed; however, it also reduces the legal certainty that data subjects have that their data will not be used for purposes they have not agreed to.33 It is this paper’s contention, however, that it is not the intent of the legitimate interest criterion but the lack of consensus on its application that creates the ambiguity. Moerel and Prins articulate a test for applying legitimate interest that is cognizant of the need to use data for Big Data processing while ensuring that the rights of data subjects are not harmed.
As demonstrated earlier, the processing of data and its underlying purposes have become exceedingly complex, and the conventional tools for describing these processes, privacy notices, are too lengthy, too complex and too numerous to have any meaningful impact.34 The idea of informational self-determination, as contemplated by Westin in American jurisprudence, is not achieved under the current framework. Moerel and Prins recommend five factors35 as relevant to determining legitimate interest. Of the five, the following three are relevant to the present discussion:
Replacing the purpose limitation principle with a use-based system as articulated above poses the danger of allowing governments and the private sector to carry out indiscriminate data collection under the blanket guise that any and all data may be of some use in the future. The harms-based approach has many merits, and there is a stark need for greater use of risk assessment techniques and privacy impact assessments in data governance. It is important, however, that it merely add to the existing controls imposed at data collection, and not replace them entirely. The legitimate interest principle, on the other hand, especially as put forth by Moerel and Prins, is more cognizant of the different factors at play: the inefficacy of the existing purpose limitation principle, the need for businesses to use data for purposes unidentified at the stage of collection, and the need to ensure that the principle is not misused for indiscriminate collection and purposes. It does, however, place a much heavier burden on data controllers to take various factors into account before determining legitimate interest. If legitimate interest is to emerge as a realistic alternative to purpose limitation, there needs to be greater clarity on how data controllers must apply the principle.
‘Cybersquatting’ is a term used to refer to the deliberate, bad-faith, abusive registration of trademarks as internet domain names, with the speculative, mala fide intent of selling them to the trademark owners at a much higher price.14 A corollary of this is ‘cyberpiracy’, which involves a violation of copyright, as the registrant uses a confusing domain name to mislead customers to their website for commercial gain.
An interesting high-profile case of cybersquatting occurred in 1995, when American entrepreneur, Dennis Toeppen registered 200 domain names corresponding both with generic terms as well as with trademarks of the times, going on to state that domain names were, “…valuable underdeveloped real estate…(with) absolutely no statutory or case law regarding trademarks in the context of Internet domain names at the time.”15 At that time, the only option available for complainants who were obviously disgruntled by this turn of events was to approach the court in trademark-related litigation.16 Toeppen was eventually sued and found guilty of trademark dilution, and the court opined that cybersquatting was an effective threat to the proper functioning of trademarks.17
Traditionally, trademark infringement is based on the notion that the alleged infringing act creates a likelihood of confusion in the minds of consumers.18 This is easily applied to cyberpiracy cases; the issue of cybersquatting, however, will not always fall under this definition, as the registration of the domain name does not itself cause confusion in the minds of customers, even though it violates the rights of the trademark holder. Most of the time, a cybersquatter will not post anything on the website, so there is no ‘use’ of the mark, which is required to be proven in cases of infringement.19
Trademark dilution is another avenue by which mark owners have traditionally sought to bring action against such registrants.20 In the US, the Federal Trademark Dilution Act provided for an injunction in the event that another person’s commercial use of a trademark caused dilution of its distinctive quality.21 Such dilution could take the form of blurring, i.e., the unauthorised use of a mark for dissimilar goods, which may reduce the value of the mark as a unique identifier,22 or tarnishment, i.e., the association of a mark with inferior quality such that its reputation is harmed.23 However, while most cases of cybersquatting may fall under these categories,24 in some instances the act of cybersquatting may lead to neither, as seen in Panavision v. Toeppen,25 where the defendant merely registered the trademarked domain name and sat on it with the intention of selling it to the plaintiff, without seeking to post anything. The court based its decision on the harm caused by the reduction in the Panavision mark’s capacity to act as a distinguishing factor for its goods and services on the internet. The implication of this logic was that the likelihood of the trademark holder winning would be higher for ‘famous’ marks, while the rights of those who held marks that were not famous might not be adequately protected, as dilution would be harder to prove; alternatively, as seen in the Hasbro26 case, the courts might broadly extend the definition of what amounted to a famous mark.
In 1999, the US Congress passed the Anticybersquatting Consumer Protection Act, creating civil liability for cybersquatting and expanding the reach of the existing law beyond ‘famous’ marks, while limiting liability to instances of bad-faith intent.27 Fair-use exceptions, however, were allowed in this regard.
It is pertinent to note that few other jurisdictions have a law specifically dealing with cybersquatting; the UK and Germany, for instance, resolve cybersquatting matters using trademark law.28 A landmark case in this regard is Harrods v. UK Network Services Ltd.,29 in which the court accepted that the law relating to trademark and passing off could extend to domain names.
Uniform Domain Name Dispute Resolution Policy
The UDRP is an international dispute resolution procedure under the mandate of ICANN, which allows trademark holders to challenge the registrant of an Internet domain name and resolve the issue by agreement, court action or arbitration.30 As Froomkin has analysed, the power of the UDRP stems from ICANN’s sole control over one of the most critical internet resources, viz., domain names.31 One can only acquire a domain name from a registrar with the right to allocate names in an ICANN-accredited domain name registry,32 and it is ICANN’s authority that decides which registries are authoritative.
The policy emerged in 1999, after the United States Department of Commerce gave ICANN the authority to control domain name registrations.33 Prior to this, the NSI had its own dispute resolution policy, created in 1995, under which the complainant had to give notice to the registrant that there had been a violation of trademark, following which the registrant would have to attempt to prove that she owned a trademark in the contested name; failing this, the NSI would put the domain name on hold until a resolution was reached between the parties, either through agreement or through litigation.34 This was obviously problematic, as the NSI did not actually provide a process for the resolution of disputes in the first place. It was also argued that this policy facilitated reverse domain name hijacking, given the ease with which the alleged infringer’s domain name could be placed on hold and the limited defence available to the registrant, consisting solely of showing that she owned the trademark in the domain name in question.35
In 1998, the NTIA, through its White Paper, called for a study on domain names and trademark issues from WIPO, inviting WIPO to “…(1) develop recommendations for a uniform approach to resolving trademark/domain name disputes involving cyberpiracy (as opposed to conflicts between trademark holders with legitimate competing rights), (2) recommend a process for protecting famous trademarks in the generic top level domains, and (3) evaluate the effects, based on studies conducted by independent organizations.”36 In the White Paper, the NTIA acknowledged that the rights of trademark holders were being affected, while conceding that there were also other legitimate countervailing rights that needed protection.37 WIPO recommended that the scope of the new policy be restricted to bad-faith, abusive domain name registrations, or cybersquatting, and not be extended to disputes between two parties with legitimate competing interests, and that it should reduce the need for court litigation to resolve such disputes.38 In September 1999, the draft implementation documents for the policy were prepared, and after public comment ICANN adopted a second set of implementation documents, resulting in the UDRP, adopted in January 2000.39
The UDRP is a policy that has been adopted by all ICANN-accredited registrars, and it applies between the registrar and the customer.40 Paragraph 4(a) outlines when the domain-name holder is required to submit to a mandatory administrative proceeding: when a complaint is made that (i) the domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; (ii) the domain-name holder has no rights or legitimate interests in respect of the domain name; and (iii) the domain name has been registered and is being used in bad faith by the holder of said name.41 All three elements must be present in the complaint. “Bad faith” as used in Paragraph 4(a)(iii) has been defined in the policy to cover any of four instances: first, where the circumstances indicate that the domain name was acquired for the sole purpose of transferring it to the complainant (the legitimate owner of the trademark) for excessive consideration (i.e., not in cases where the consideration covered reasonable costs); second, where the registration was made to prevent the owner of the mark from reflecting the mark in a corresponding domain name; third, where it was registered to disrupt a competitor’s business; and fourth, where it was registered to create a likelihood of confusion so as to attract internet users to the website for commercial gain.42 This is a non-exclusive list, and arbitrators have added to the definition.
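Because the three Paragraph 4(a) elements are conjunctive, a complaint that establishes only one or two of them fails. This can be sketched as a simple boolean test (the field names and the function are hypothetical constructs for illustration; the policy itself is prose, not code):

```python
# Sketch of the conjunctive test in Paragraph 4(a) of the UDRP:
# a mandatory administrative proceeding requires ALL three elements.
from dataclasses import dataclass

@dataclass
class Complaint:
    confusingly_similar: bool     # 4(a)(i): identical/confusingly similar to the mark
    no_legitimate_interest: bool  # 4(a)(ii): no rights or legitimate interests
    bad_faith: bool               # 4(a)(iii): registered and used in bad faith

def elements_made_out(c: Complaint) -> bool:
    """All three elements must be present together; any one alone fails."""
    return c.confusingly_similar and c.no_legitimate_interest and c.bad_faith
```

For example, a complaint showing similarity and bad faith but unable to rebut the registrant's legitimate interest would not satisfy 4(a).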
ICANN provides three affirmative defences for the domain-name holder in whose name the domain has been registered.43 The first protects those who can demonstrate that they intended to use the domain name in connection with a bona fide offering of goods or services before receiving the notice.44 The second is that the holder has acquired recognition through the domain name despite lacking a trademark.45 The third is that the domain-name holder is using the name legitimately for non-commercial purposes, or that the use qualifies as fair use without any intent to mislead consumers for commercial gain.46 This drew from US trademark law of the period when the UDRP was introduced, when purely non-commercial use was seen as a strong defence against an infringement claim; however, it drew a limit around this by providing for the tarnishment concept as a restriction on such non-commercial use.47
The policy creates the institution of the “Provider”, i.e., an administrative dispute resolution provider, and gives the complainant the power to select the provider from amongst those approved by ICANN.48 Currently, there are five such providers – the Asian Domain Name Dispute Resolution Centre; the National Arbitration Forum; WIPO; the Czech Arbitration Court Arbitration Center for Internet Disputes; and the Arab Center for Domain Name Dispute Resolution.49 The provider selected then goes on to administer the proceeding, appointing a panel for the same.50 The proceedings are guided by the Rules of Procedure.51
In terms of costs, the fees are to be borne by the complainant, unless the domain name holder chooses to expand the administrative panel in size from one to three.52 ICANN itself refrains from participation in the proceedings.53 This proceeding, most importantly, does not bar the jurisdiction of the courts.54
Another persistent criticism of the UDRP is its cost. While the expenditure is certainly less than that of litigation, especially under the US court system, it nonetheless costs the defendant between $5,000 and $10,000, and possibly more as the matter is pursued.55 Given that a significant share of cases end in default, there is a tendency for meritless complaints to be filed, the brunt of which is often borne by innocent respondents who do not meet the bad-faith description. This is unethical, and the loophole needs to be repaired.
Most arbitration is not precedent-based, and the UDRP is no exception to that rule; nonetheless, in many cases under the UDRP, precedent has been cited and used to arrive at the reasoning behind the decision.56 Mueller goes on to state that most of these precedent-setting cases have resulted in an expansive interpretation of the rights of the trademark holder, and that the definition of ‘bad faith’ has been inflated.57 Telstra v. Nuclear Marshmallows,58 for instance, has served as precedent for the principle that even the passive holding of a domain name constitutes bad-faith usage (even if there is no actual use). A frequently cited case even holds that it is bad faith if the respondent is not contributing any value to the internet,59 which stretches the definition to breaking point. There thus seems to be excessive power vested in the panellists deciding UDRP disputes, which is problematic as they have demonstrated arbitrary decision-making, often resulting in de facto precedent being set. A related criticism is that the result of a UDRP dispute is heavily contingent on the panellists chosen to arbitrate the case, as well as on the Provider used.60 For instance, WIPO and the NAF are widely perceived as being biased in favour of complainants, and there is accordingly evidence of extensive forum shopping by complainants. In 2014, for instance, WIPO decided 2,634 UDRP cases and NAF decided 1,557; complainants prevailed a massive 92 percent of the time when there was a default (true for about 1,000 of the cases), while when the respondent did respond, the success rate for respondents was about 46 percent.61 This could possibly be because of the composition of these panels.
WIPO panels are largely composed of trademark lawyers and professors, while NAF panels are constituted largely of retired US justices.62 It has also been seen that there is a dramatic difference in outcomes between three-member and single-member panels: the win rate for complainants in 2001 was about 83 percent before single-member panels, as compared to 60 percent before three-member panels.63
The Trademark Clearinghouse was conceptualised in 2012, when ICANN announced that it was going to introduce a new gTLD programme to expand the number of generic top-level domains beyond the existing 21.64 The new policy allows anyone, for a fee, to register any new top-level domain.65 ICANN introduced this policy as a means of protecting trademark owners’ rights, preventing over-reliance on .com, and allowing trademark holders to apply for .brand protections to create a secure online location for their brand.66 ICANN envisioned this as reducing the problem of cybersquatting and the consequent dilution of brands.
The Trademark Clearinghouse has been described as “…the most important rights protection mechanism built into ICANN’s new gTLD programme.”67 It is a centralised database of verified trademarks, connected to the new gTLDs that ICANN will launch.68 Once trademark holders place their mark in the Clearinghouse, they can register the trademark as a second-level domain under a newly delegated gTLD before the registry opens the gTLD to the general public; and once a legitimate trademark holder places a mark in the Clearinghouse, no one else can obtain the second-level domain.69 For instance, for the gTLD .icecream, if Haagen Dazs places its mark in the Clearinghouse, it gets priority over the domain name haagendazs.icecream by virtue of its trademark; another company with the same name, however, would not have the same priority rights. The Clearinghouse was conceptualised in 2009 by a team of 18 IP specialists entrusted with producing a report on solutions for the potential trademark risks stemming from the new gTLD expansion programme.70 One of their most important recommendations was the creation of an Intellectual Property Clearinghouse, which they felt would adequately protect the existing rights of trademark owners. The clearinghouse would be sustainable, and would benefit registries, users and consumers.71 They outlined the following roles for the IP Clearinghouse:72
Some of their more important recommendations were that the IP Clearinghouse be maintained by a neutral service provider; that it be able to accommodate identical trademarks registered under different classes of goods and in different geographical locations, given the territorial nature of trademark law; and that it accommodate all types of registered trademarks (words, logos, and others).73 Finally, they recommended that the IP Clearinghouse provide the following services:74 (a) annual validation of trademark rights, (b) a Globally Protected Marks List, (c) a Pre-Launch IP Claims Service,76 and (d) data for taking part in Uniform Rapid Suspension mechanisms before the registration of the trademark.77
Based on these recommendations and public comments, ICANN launched the TMCH policy in March 2013.78
The procedure for this has been outlined in the Applicant Guidebook issued by ICANN. To acquire a new gTLD, an applicant is to submit a fee of $185,000 and indicate whether it wants to apply for a .brand gTLD (say .mamagoto), an industry gTLD (say .basil), a geographic gTLD (say .Digboi) or a non-Latin script gTLD (say .मैंदिखाऊं).79 The application must also be marked as either ‘open’ or ‘community-based’.80 The former gives the applicant the right to restrict the gTLD and/or exclude competition. The latter implies that the applicant must use the gTLD for the benefit of the community in question.81 If the application then passes the evaluation stage, the applicant must enter into an agreement with ICANN to provide trademark protection through a mandatory Trademark Claims Service and Sunrise Service Policy for a minimum of 30 days before the launch; this enables trademark holders to protect against use of their mark by precluding third parties from registering the identical domain under that gTLD.82 This is where the role of the Trademark Clearinghouse becomes important. The Trademark Clearinghouse is not a rights protection mechanism unto itself – it is essentially a database which provides information to the new gTLD registries for supporting pre-launch Sunrise or Claims Services.83 Word marks that can be included in the Clearinghouse include nationally or regionally registered marks, marks validated through judicial proceedings, marks protected by a statute or treaty in effect at the time the information is submitted into the Clearinghouse, as well as other marks constituting intellectual property.84
The Sunrise Service Policy gives trademark holders the opportunity to register domain names related to their marks before the names are available to the public, a period of about 30 days.85 If a given TLD (say .icecream) is applicable to a certain good or service, the owner of a trademark in this industry (say Kwality) would likely want to ensure that its mark is registered under the TLD, and the Sunrise period facilitates that in an expeditious way. The Trademark Claims service lasts for about 90 days following the initial registration, and if, during this period, there is an attempt to register a domain name matching a mark recorded in the Trademark Clearinghouse, the person making the attempt will receive a notification; if they still go ahead and register the domain name, the TMCH will send a notice to the trademark holder, who can then request the registry to take action to resolve the issue immediately.86 However, this remains the limited extent of rights available under this mechanism. While these notices do not inherently prevent infringement, registration that proceeds even after receipt of notice makes bad faith easier to prove. Rights holders are presumably expected to use the TMCH in conjunction with the UDRP or the URS. Moreover, if the holder of the trademark has previously won a UDRP or a court decision on the trademark it is filing with the TMCH, the trademark owner can elect to file a verified trademark and 50 names similar to that trademark under the “Abused Domain Name Label Service.”87 The TMCH thus provides validated data for the URS procedure, as recommended in the 2009 report discussed earlier.
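The two-notice flow of the Claims service described above can be sketched in a few lines. This is an illustrative model only: the data, function names, and notice formats are assumptions for exposition, not ICANN’s actual TMCH implementation.

```python
# Verified mark records: (second-level label, gTLD) -> mark holder.
# Entries here are hypothetical.
CLEARINGHOUSE = {
    ("kwality", ".icecream"): "Kwality Ltd.",
}

def attempt_registration(label, gtld, registrant):
    """Return the notices generated by an attempted second-level
    registration during the Trademark Claims period."""
    notices = []
    holder = CLEARINGHOUSE.get((label, gtld))
    if holder is not None:
        # Step 1: the prospective registrant is warned that the label
        # matches a mark recorded in the Clearinghouse.
        notices.append(("claims-notice", registrant))
        # Step 2: if registration proceeds anyway, the mark holder is
        # notified and may pursue UDRP/URS. Note that the TMCH itself
        # blocks nothing; it only generates notices.
        notices.append(("mark-holder-notice", holder))
    return notices
```

A registration attempt on an unrecorded label generates no notices at all, which reflects the limited scope of the mechanism: it informs, but does not adjudicate.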
It is important to note that the TMCH exists merely to validate the data, and does not determine the substance or scope of the rights. Two identical trademarks from different jurisdictions can therefore coexist under this scheme.
A major concern that has been raised is that the TMCH, despite being a voluntary and optional service, could eventually end up challenging the traditional system of trademark registrations.88 ICANN has been vested with the authority to maintain an important database, and this gives it considerable discretionary power; some critics are concerned that the TMCH might end up substituting for national trademark offices.89 This seems far-fetched as of now, given that inclusion in the TMCH indicates nothing but the validity of an existing trademark. That inclusion is itself based on trademarks vested by national trademark registries, and non-inclusion in the TMCH does not invalidate a trademark. No additional trademark rights are created through inclusion in the TMCH.
That said, it cannot be denied that the process of inclusion is costly, and is likely to favour those with larger brand presence and financial clout. There may also be significant confusion in this respect. For instance, the gTLD .microsoft may not create much trademark confusion, given that ‘Microsoft’ is a fairly well known registered trademark. However, the gTLD .apple can lead to multiple SLD contentions, given that it is a generic word as well as one of the largest brands in the world. There is a real worry that, given the clout and presence commanded by a large and established brand such as Apple Computers, Inc., its rights may be overwhelmingly prioritised over, say, the trademark held by a local apple-selling establishment in a third world country. Further, in the case of two identical trademarks in different industries, it would be unclear to the consumer which was being referred to – for instance, .guardian could refer to the insurance company “Guardian” or the news media house “Guardian”. The likelihood of confusion increases in this respect, especially with .brand trademarks. The Sunrise Policy, it has been argued, regards the DNS as a flat namespace and fosters consumer confusion, and it also allows parties to block multiple second-level domain names without necessarily using them, thereby promoting empty tracts of namespace.90
The TMCH service provider has been given the discretion to determine what additional trademark-like information it wishes to include.91 There have also been privacy concerns raised about the TMCH data.92 These could be resolved through the development of strong policy on these points – discussions on rights protection mechanisms (RPMs) and their reform are ongoing even now.
Some people raise concerns about the need for the URS in the first place, arguing that the UDRP seems to be functioning effectively, and that the URS policy is heavily biased against the defendant.93
While the UDRP was created with the intention of reducing the burden of litigation, and as a system for ensuring fairness for all rights holders, it has evolved into a system which accords strong privileges to trademark holders. Even though the definitions within the policy seem fairly narrow, UDRP panels have, through their adjudication, expanded the meaning of ‘bad faith’ under the policy far beyond its original mandate. It must be kept in mind that the founding rationale behind the creation of the UDRP was to check the offence of malicious cybersquatting. This has given way to an expensive system which is unfavourable to the domain name holder even where he or she has no mala fide intention of infringing a trademark. The uniqueness of the domain name system does not seem to guide the adjudicators of the panels, and any suggestion of reform must begin with a revamp of the constitution of the panels. Instead of focusing on how the trademark is used, there is a tendency to focus on whether the trademark is used at all, and where this is coupled with default on the part of the respondent, the respondent’s rights are rarely taken into account. This area of the UDRP needs urgent reform, so that more equitable decisions serve the original purpose of the policy.
With respect to the TMCH, it is early days yet. The major areas of concern here are brand confusion and the hegemony of well established brands over rightful identical trademark holders. It is important to keep in mind that while the Clearinghouse itself is not a dispute resolution system, it has a bearing on whose rights will be prioritised, especially under the URS policy. Some have suggested that the URS policy be suspended altogether, given that it is even more heavily biased towards the rights of the trademark holder than the UDRP. There may be some merit to that, as the existing UDRP already provides for a process and, with reform, ought to suffice for the purpose.
The purpose of these policies is not to foster monopolies over the Internet, a resource which has since its inception been based on principles of freedom, ease of access, and multistakeholderism. It is therefore imperative that the way forward find a balance between the interests of all the players in this ecosystem, and not give undue privilege to those with financial clout and brand value to make their presence felt.
Domain names are alphanumeric identifiers for websites, and they are exclusive in nature – no two website owners can have identical domain names.1 They have been defined as the alphanumeric text strings which follow the two slashes in a World Wide Web address.
A typical e-marketplace is a platform that connects sellers of products on one side with potential buyers on the other. The platform provides the required glue between sellers and buyers. The cross-side network effect complements the same-side network effects in a two-sided market platform (2SMP), as shown in Figure 1.
Pricing is one of the important strategies in a 2SMP. Typically, one set of users is subsidised while the other set pays a premium, depending on the price elasticity of demand. In a two-sided market with positive cross-side network effects, the platform provider, even if it is a monopolist, has an incentive to reduce platform profit. This is because, in order to compete effectively on one side of the market, a platform needs to compete well on the other side. This creates a downward pressure on the prices offered to both sides compared to the case where no cross-side effects exist (Prasad & Sridhar, 2014).
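The subsidy logic can be illustrated with a toy model. In the sketch below, all functional forms and numbers are assumptions for exposition (linear participation, a monopoly platform, a brute-force price grid), not a model from the text: participation on each side falls in its own price and rises with participation on the other side, and a grid search over price pairs shows the platform subsidising the side that generates the larger cross-side benefit.

```python
import itertools

def participation(p_b, p_s, e_b, e_s, a=10.0):
    # Fixed-point iteration for interdependent participation levels:
    #   n_b = a - p_b + e_b * n_s   (buyers value seller participation)
    #   n_s = a - p_s + e_s * n_b   (sellers value buyer participation)
    # Converges whenever e_b * e_s < 1.
    n_b = n_s = 0.0
    for _ in range(100):
        n_b = max(0.0, a - p_b + e_b * n_s)
        n_s = max(0.0, a - p_s + e_s * n_b)
    return n_b, n_s

def best_prices(e_b, e_s, a=10.0):
    # Monopoly platform picks (p_b, p_s) to maximise total revenue.
    grid = [0.5 * i for i in range(41)]  # candidate prices 0.0 .. 20.0
    return max(
        (p_b * n_b + p_s * n_s, p_b, p_s)
        for p_b, p_s in itertools.product(grid, grid)
        for n_b, n_s in [participation(p_b, p_s, e_b, e_s, a)]
    )
```

With no cross-side effects (`e_b = e_s = 0`) both sides are priced identically; with a strong buyer-side externality from sellers (`e_b` much larger than `e_s`), the seller side is offered a low price while buyers pay a premium, which is the subsidy pattern described above.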
A significant feature of the two-sided market is that one group of users chooses to utilise only one platform, i.e., they ‘single-home.’ The other group, in contrast, may ‘multi-home.’ For example, in the e-commerce platform illustration in Figure 1, typically consumers multi-home while sellers of merchandise single-home. In such a market, if a consumer wishes to interact with a particular seller, he or she has no choice but to interact with the seller’s chosen platform. Thus platforms have monopoly power over providing access to their single-homing users for the multi-homing side. This leads to the possibility of high prices being charged to the multi-homing side. By contrast, platforms have to compete for single-homing users (i.e., sellers in the e-marketplace), and their high profits from the multi-homing side are to a large extent passed on to the single-homing side in the form of low or even zero prices. This is known as the ‘waterbed effect’ and has been demonstrated in analytical models such as Economides and Tag (2012).
The prospect of increasing returns to scale in network industries, especially in 2SMPs, can lead to winner-takes-all battles and, in turn, result in a relatively small number of platform providers, if not a monopoly altogether. An aspiring platform provider must thus consider whether to share its platform with rivals or fight to the death (Eisenmann, et al., 2006). Coping with platform competition is a two-step process as per Eisenmann, et al. (2006). The first step is to determine whether the networked market is destined to be served by a single platform. When this is the case, the second step is deciding whether to compete alone or share the platform in a co-opetition model.
A broad taxonomy of 2SMP is necessary to understand the scope, responsibility and liability of platform providers (Sridhar & Srikanth, 2015).
First are e-marketplace platforms that connect buyers with sellers and normally have an associated brand. These may be niche or horizontal in nature, covering a large number of products and associated sellers. The platforms do some due diligence in selecting the sellers. The quality of products and services is taken seriously, and the platform providers give buyers enough information, including ratings and facilities offered, so that buyers can make informed decisions. Payments for products and services are handled by the platform provider, and the platform acts as a one-point contact for the customer. Moreover, these platform firms build brands through advertising and other means to attract both buyers and sellers and gain their trust. Due to this role, these platforms normally bear limited liability and responsibility for any errors in the completion of any transaction that passes through their platforms. For example, these platform providers normally have a customer grievance cell and toll-free numbers for this purpose. They also clearly state cancellation and refund policies for the transactions done through their platform.
Second are essentially directory services that enable customers to get information about products or services. These firms enable buyers and sellers to meet and complete their transactions. Though these platform providers do some due diligence in selecting whom to list and also provide rating services, the onus of a successful partnership between buyers and sellers normally does not fall on the platform provider. Given the low barriers to entry in such a service, there is often stiff competition among these types of platforms.
Third are aggregator platforms, which sell goods and services under their own e-brand, do not have physical counterparts, act as a one-point contact for the customer, and source their products and services both from recognised sellers/brands and from individuals and aggregators who may not have any brand or even a physical presence. These platforms are the ones who promise to bring sanity and economic prosperity in countries such as India, where the unorganised and informal sector plays an important role in the national economy. These platforms enable products and services to be made available which otherwise would not have been noticeable, and provide business opportunities for micro-entrepreneurs.
The responsibilities and liabilities of these platforms are still evolving. Hence there is a regulatory arbitrage that the 2SMP firms try to leverage at times (Rogers, 2015). Most of the transport aggregator firms describe themselves as “technology platform companies” and try to place themselves outside the ambit of regulation. The following figure illustrates how various competition and regulatory factors affect different types of 2SMP firms.
By playing a technology-based intermediation role, the 2SMP firms reduce the search cost for users on either side of their platform. In an unorganised market, search costs are often very high. Even in a relatively organised market such as the US, the taxi sector suffers from high search costs; Uber, through its platform-based approach, minimises the search costs for both cab seekers and drivers to find each other (Rogers, 2015). The platform firms can potentially solve high search costs and the resultant low supply through effective intermediation. However, the question is: if, during this process, they bypass extant regulation, what should the policy response be? Should the extant regulation be made applicable, which in effect might increase search costs and reduce the associated benefits? As shown in Figure 2, directory services reduce search costs more than the other forms of 2SMPs.
The 2SMPs provide a way for either side users to connect with each other directly, thus reducing intermediation. In emerging countries, it is well known that intermediaries appropriate huge rents on each transaction between the two sets of users and thus reduce public benefits. Disintermediation also reduces search costs and improves the economic welfare of the two sets of users. The disintermediation is higher for directory services as they provide an easy way for the two sets of users to meet and complete their transactions.
As indicated earlier, platforms tend to get commoditised, and the winner-takes-all nature of 2SMPs might lead to monopolies or cartels in the marketplace. Excessive market power can threaten consumer welfare. Thus, regulatory oversight and Significant Market Power assessment are needed to avoid predatory pricing, cartelisation, and abuse of dominant power. In general, the competition watchdog should frame rules appropriate for the 2SMPs, that is, rules that are welfare-enhancing and do not reduce the public benefits of such market forms.
When firms in a 2SMP aspire to become monopolies, they might engage in discrimination against either set of users for economic, political or personal reasons. There have been cases filed in the US against Uber for exhibiting such discriminatory behaviour (Rogers, 2015). What should be done against such possible discrimination?
There are also information privacy concerns due to the collection of data at a granular level by the e-commerce firms. While, on the one hand, the collection and analysis of consumer data improves the personalisation of services, it might invade the security and privacy of individuals. What is the appropriate regulatory framework for protecting the security and privacy of individuals interacting with 2SMPs while at the same time leveraging the power of technology to provide a better quality of experience?
As shown in Figure 2, the e-marketplaces have higher regulatory arbitrage compared to directory services due to the nature of, and their involvement in, associated business activities.
Though firms in this space use the caveat of being technology providers to reduce their liability, there is a set of minimum liability and responsibility clauses that the firms need to adhere to. For example, if one gets spoilt food delivered by a platform provider, who should be liable – the one who made the delivery (i.e., the platform) or the one who produced it (i.e., the seller)? Today, most 2SMP companies provide reasonable options for returns and refunds in the case of deficient products or poor delivery service, but there are situations where the liability could, and should, extend beyond just a refund. This is somewhat similar to the many cases where infringement of copyrighted material makes not only the infringer culpable but also the platform that facilitated the infringement. The answer to this question is not obvious. However, the use of tort law to delineate between (i) intentional wrongdoing and (ii) carelessness and negligence in causing financial or other loss to the victims is essential in assigning strict or limited liability to the platform providers. There is limited literature on the risks and associated remedies of electronic marketplaces. Weber (2014) discusses the moral hazard problems faced by the intermediaries, especially those that provide shared accommodation, and shows how all stakeholders benefit from intermediated sharing of goods. As shown in Figure 2, the liability is greatest for the e-marketplaces. The aggregators also have some limited liability.
The hypothesis is that, due to the above forces that shape the economic value of platforms, this value is potentially higher for aggregator services than for the other two types of 2SMPs.
It is estimated that the radio taxi market in India is worth $6–9 billion, growing at 17–20 percent every year. However, the organised segment forms a small percentage of this market. It is estimated that the number of taxis in the organised sector will reach 30,000 by 2017 (Yourstory, 2014).
There are large unorganised taxi markets in most emerging countries, mainly due to the lack of adequate public transport. Referred to as Radio Taxi Operators, a large number of fleet operators provide (i) airport service, (ii) inter-city service and (iii) intra-city service. These cab services are licensed by the State governments, and tariffs are fixed by the rate card issued by the State transport department.
In India in 2010, a home-grown alternative to Uber emerged in Ola Cabs, an online marketplace for booking cabs and car rentals in the country. The founder, Bhavish Aggarwal, had said of his ambition: “To change how India travels and revolutionise personal transportation in the country”.
On 24 December 2014, an Uber cab driver in Delhi was apprehended by police after allegedly sexually assaulting a woman passenger. Uber’s first public response was that it would not take responsibility for the driver’s crime, which provoked public outrage against Uber and other taxi aggregators. After all, the taxi drivers who use these aggregators hold All India Permits. The government’s reaction, therefore, was that the cab aggregators are responsible for the drivers who use their platform and should be liable for any wrongdoing on the part of these drivers. Since then, it has not been easy for the taxi aggregators: a total ban was imposed on those providing taxi services without the Radio Taxi Operator licence, and their operations were cancelled in many states, including in the cities of Bangalore and Delhi. While Ola and Uber argue that they are technology companies, not taxi operators, and thus come under the jurisdiction of information technology law, the government stands its ground.
However, in a landmark judgement on 15 July 2015, the Delhi High Court suggested that the city government do away with the ban on app-based cab service providers, saying they cannot be blamed for any illegal acts committed by the cab drivers, who are holders of All India Permits (AIP) granted by the authorities concerned. The court also reiterated the Supreme Court’s direction that “no commercial vehicles shall ply in Delhi unless converted to Single Fuel Mode of CNG (compressed natural gas) with effect from 01.04.2001”, Justice Jayant Nath said in his order (ET, 2015). Ola is now trying to convert all diesel/petrol cabs on its platform to CNG to continue its operations in Delhi.
The Karnataka Transport Secretariat was one of the first in the country to come out with “The Karnataka on-demand Transportation Technology Aggregators Rules, 2016” on 2 April 2016 (KTS, 2016). The salient features of this notification are discussed each in turn below:
1. “Aggregator” means a person who is an aggregator, or operator or an intermediary/market place who canvasses or solicits or facilitates passengers for travel by a taxi and who connects the passenger/ intending passenger to a driver of a taxi through phone calls, internet, web based services or GPS/GPRS based services whether or not any fare, fee, commission, brokerage or other charges are collected for providing such services.
a. Remark: The definition also includes operators who might own physical taxis, apart from marketplaces.
2. All on-demand transport aggregators shall obtain a license for valid operation as per these rules.
a. Remark: The aggregators come under licensing regime and hence under rules and regulations as attributable under the license conditions. This reduces the regulatory arbitrage for aggregator market places and brings them on par with operators and leasing services.
3. The aggregator shall have a fleet of a minimum of 100 taxis, either owned or through an agreement with individual taxi permit holders.
a. Remark: This increases the entry barriers for market place aggregators.
4. The aggregator shall have facilities for monitoring the movement of taxis with the help of GPS/GPRS, along with a control room facility.
a. Remark: This places the responsibility of location awareness on the licensee. It is to be noted that marketplace-based aggregators already use such consumer tracking and monitoring facilities. This clause, however, forces the existing cab operators to adhere to the safety guidelines, thus increasing consumer security and benefits.
5. Every cab shall be fitted with a yellow coloured display board with words “Taxi” visible both from the front and the rear.
a. Remark: No private vehicles can be run by the aggregator. This also increases the responsibilities of the aggregator in running the cab service.
6. The driver shall have a minimum driving experience of 2 years. He shall be of a good moral character without any criminal record.
a. Remark: This places the responsibility of selecting drivers with the associated qualifications on the aggregator. This is likely to improve the safety and security of the passengers.
7. In any case, the fare, including any other charges, if any, shall not be higher than the fare fixed by the Government from time to time. No passenger shall be charged for dead mileage, and the fare shall be charged only from the point of boarding to the point of alighting.
a. Remark: Surge pricing, practiced widely by the aggregators, is not allowed beyond the ceiling fixed by the Government. This defeats the very purpose of supply-demand based pricing models. However, preventing the aggregator from charging very high fares when demand exceeds supply transfers some benefits from the aggregator to cab users at peak times.
8. The aggregator shall maintain records, in digital form of all the taxies at his control, indicating on a day to day basis, the trips operated by each vehicle, details of passengers who travelled in the vehicle, origin and destination of the journey and the fare collected. The records so maintained shall be open for inspection by an officer nominated by the licensing authority at any time.
a. Remark: All record-keeping responsibilities are transferred to the aggregator. This increases consumer benefits.
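Rule 7 above effectively amounts to a ceiling on the per-kilometre rate, with only the boarding-to-alighting distance billable. A minimal sketch of that billing logic (all rates and figures are hypothetical, not drawn from the notification):

```python
def billable_fare(base_rate_per_km, trip_km, govt_ceiling_per_km, surge=1.0):
    # Surge pricing may raise the aggregator's rate, but the effective
    # rate is capped at the government-fixed ceiling (Rule 7).
    effective_rate = min(base_rate_per_km * surge, govt_ceiling_per_km)
    # Dead mileage is excluded: only the boarding-to-alighting
    # distance (trip_km) is billed.
    return effective_rate * trip_km
```

For example, at a hypothetical base rate of Rs 10/km and a ceiling of Rs 12/km, a 1.5x surge on a 5 km trip bills min(15, 12) × 5 = Rs 60 rather than the uncapped Rs 75.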
The above rules have the following effects on pure play aggregators such as Uber:
1. Regulatory arbitrage: Reduced as the licensing regime brings the pure-play aggregators on par with regular cab operators.
2. Cost of operations: Increase in operational cost for the aggregator in complying with the rules.
3. Market efficiency: The cap on pricing deviates from market-determined pricing that matches supply and demand, leading to possible inefficiencies. However, Uber can continue to derive benefits by matching supply and demand using technology on a real-time basis.
4. Competition: Owned and leased taxi car operators will also be considered on par with pure play aggregators. Hence the competition increases.
5. Social benefits: The liability and responsibility of the aggregators have been increased. As there is no cap on the number of trips, the utilisation of drivers’ time improves. The cap on pricing may, however, have a negative effect on drivers’ income. For the consumer, the benefits increase due to improved security, monitoring and the ceiling on pricing.
Overall, 2SMPs generate consumer surplus, mitigate day-to-day problems of common citizens, reduce information asymmetry problems and thus the appropriation of rents, and provide better job opportunities for blue collar workers and those at the bottom of the pyramid. Though Venture Capital (VC) funds have played their role in scaling up these platforms, their sustainability depends on the economics of 2SMPs, including pricing strategies, the stickiness of both sides of users, and the magnitude of cross-side network effects.
Apart from the sustainability of these 2SMPs, the firms also have to deal with regulatory and policy dimensions that are still evolving. Various countries across the world have been trying to extend the applicability of extant laws and regulations to this platform economy. While the responsibilities and liabilities of these platforms are still evolving under the extant laws, the platforms cannot simply take a “caveat emptor” approach to their legal position on their websites (Sridhar & Srikanth, 2015). Thus, there is a need for a balanced approach to dealing with 2SMPs and the associated regulation and policies.
The following are certain policy guidelines for dealing with 2SMPs.
1. Significant Market Power Assessment: Much like in any other sector, Significant Market Power (SMP) assessment shall be done periodically by the regulator as or when needed to prevent market dominance by one or a few set of players in the specified market space.
2. Light touch regulation and mandatory compliance: Since social benefits tend to be higher, the 2SMPs shall be treated with light touch regulation. A subset of existing rules shall be defined as mandatory rules, and the minimum liability and responsibility shall be defined for the mandatory compliance part of the regulation. For example, if taxi cabs need to run only on environmentally friendly Liquefied Petroleum Gas (LPG), the rule should be equally applicable to those recruited by a taxi cab aggregator such as Ola Cabs. Regulatory arbitrage should not be extended in this case, as the social and public harm due to highly polluting petrol/diesel-run cabs exceeds the benefits due to disintermediation and lower search costs.
3. Fare regulation: Though platforms tend to consolidate into larger providers, fare regulation should not be extended to 2SMPs. Since the platforms are easily replicable and there are often few barriers to entry, any predatory pricing or additional rent extraction by the incumbents will be thwarted by new entrants.
4. Market share regulation: Any horizontal and vertical integration shall not be discouraged by regulation. The platforms tend to attain economies of scale and scope only through integration. Due to increasing returns to scale of such platforms, integration is likely to increase social benefits. Integration also brings about the much needed organisation to the informal sector, thus improving accountability, responsibility and ownership of the market place, and providing superior benefits to both consumers and producers on the platform as pointed out by Rogers (2015).
5. Regulatory levies: Regulatory levies increase cost of providing platform service which in turn reduces the social benefits. The regulatory agency must thus be careful to avoid levying additional tax on 2SMP providers.
6. Investment regulation: The high potential of these 2SMP firms and their global characteristic have attracted foreign investments. Any restriction on ownership and Foreign Direct Investment (FDI) will affect the much needed capital for the start-ups to scale up their operations.
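The SMP assessment in guideline 1 is commonly grounded in concentration measures such as the Herfindahl–Hirschman Index (HHI), the sum of squared percentage market shares. A minimal sketch follows; the thresholds reflect common antitrust practice (e.g., US merger guidelines), not any rule cited in the text, and the shares are illustrative:

```python
def hhi(shares_percent):
    # Herfindahl-Hirschman Index: sum of squared percentage market
    # shares, ranging from near 0 (fragmented) to 10,000 (monopoly).
    return sum(s * s for s in shares_percent)

def concentration(shares_percent):
    # Hypothetical thresholds, loosely following common antitrust
    # practice; an actual regulator would set its own bands.
    h = hhi(shares_percent)
    if h < 1500:
        return "unconcentrated"
    if h <= 2500:
        return "moderately concentrated"
    return "highly concentrated"
```

A platform duopoly with shares of 50/50 yields an HHI of 5,000, well into the highly concentrated band, illustrating why winner-takes-all 2SMP markets warrant periodic SMP review.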
As has been witnessed in many cases, the incumbents often feel threatened by the 2SMP providers and lobby against them. The regulators and policymakers should be cognizant of the fact that the resultant social benefits due to 2SMPs tend to be higher, and should hence not be swayed by the incumbents. Keeping a watch over the market while nurturing the proliferation of the 2SMPs through light-touch regulation is the key to embracing the new technology revolution for larger social benefits.
There have also been criticisms of the 2SMPs. The simplicity of 2SMPs and, in turn, of the analytical frameworks that revolve around them, was questioned in the context of Internet broadband provisioning by Sridhar & Prasad (2016). As these 2SMPs evolve into multi-sided platforms, the economics and associated regulations will have to be restructured. Further, competition regulation will also have to evolve to address various issues, including vertical integration, price discrimination, and market contestability.
Compass. (2015). The Global Startup Ecosystem Ranking 2015. Available at: http://startup-ecosystem.compass.co/ser2015/ accessed on 21 Aug 2015.
Economides, N., & Tag, J. (2012). Net neutrality on the Internet: A two-sided market analysis. Information Economics and Policy, 24(2), 91–104.
Economic Times (ET). (12 Aug 2015). Ola’s plea against Delhi government’s ban gets dismissed. Available at: http://tech.economictimes.indiatimes.com/news/mobile/delhi-high-court-dismisses-ola-plea/48447122 accessed on 20 Aug 2015.
Eisenmann, T., Parker, G., & Van Alstyne, M. W. (2006). Strategies for Two-Sided Markets. Harvard Business Review.
Karnataka Transport Secretariat (KTS). (2016). The Karnataka On-demand Transportation Technology Aggregators Rules, 2016, published on 2 April 2016. Available at: http://rto.kar.nic.in/Aggreegator.pdf accessed on 30 Jun 2016.
Kumar, K., & Du, H. (2011). JustDial: Reducing the Digital Divide through an ICT-Enabled Application of Appropriate Technology and Fortune-Seeking Behavior at the Bottom of the Pyramid. Economic Journal, 84(333), 491-542.
Prasad, R., and Sridhar, V. (2014). The Dynamics of Spectrum Management: Legacy, Technology, and Economics. Oxford University Press, ISBN-13: 978-0-19-809978-9; ISBN-10: 0-19-809978-9.
Rochet, J. C., & Tirole, J. (2003). Platform competition in two-sided markets. Journal of the European Economic Association, 1(4), 990–1029.
Rogers, B. (2015). The social costs of Uber. University of Chicago Law Review Dialogue 85. Available at: https://lawreview.uchicago.edu/page/social-costs-uber accessed on 25 Nov 2015.
Sridhar, V. (2012). Telecom Revolution in India: Technology, Regulation and Policy. New Delhi, India: Oxford University Press, ISBN-13: 978-0-19-807553-0; ISBN-10: 0-19-807553-7.
Sridhar, V. (5 August 2014). Heads it’s Flipkart; Tails it’s Amazon. Economic Times.
Sridhar, V., Prasad, R. (11 February 2016). It is time to redefine net neutrality. Hindu Business Line.
Sridhar, V., Srikanth, T.K. (13 January 2015). Who is liable in platform business? Financial Express.
Weber, T. A. (2014). Intermediation in a Sharing Economy: Insurance, Moral Hazard, and Rent Extraction. Journal of Management Information Systems, 31(3), 35-71.
World Economic Forum (WEF). (2015). The Global Information Technology Report 2015. Available at: http://www3.weforum.org/docs/WEF_Global_IT_Report_2015.pdf accessed on 2- Aug 2015.
Yourstory. (2014). Clash of the Titans: Ola vs. TaxiForSure vs. Uber. Available at: http://yourstory.com/2014/06/ola-taxiforsure-uber/ accessed on 20 Aug 2015.