
How to think about computers in the 21st century


Stephen Lake is a biomedical engineer, inventor, and entrepreneur in emerging technologies. He is the Co-founder and CEO of Thalmic Labs, the company behind the groundbreaking Myo armband, which measures the electrical activity in your muscles to wirelessly control computers, phones, and other digital technologies with gestures and motion. He envisions a future of seamless interaction between humans and machines, in which wearable computing interfaces blend the real and digital worlds. Stephen has taken his concept from idea to shipping product, with more than US$15 million in funding, close to 100 employees, numerous patent filings, and over 50,000 users worldwide.

In 2014, two huge investments in wearable technology made headlines in the tech world: Google led a US$542 million investment round into startup Magic Leap, and Facebook bought Oculus VR, maker of the Oculus Rift headset, for the hefty sum of US$2 billion.

In 2015, investment into wearable technology continued. Fossil Group agreed to acquire Misfit, a maker of wearable activity trackers, for US$260 million, and Intel acquired the wearable heads-up display manufacturer, Recon Instruments, for US$175 million. Fitbit, another wearable activity tracker maker, raised US$732 million in an IPO which valued the company at over US$4 billion.

Some of the smartest minds are making serious bets that the way people interact with technology is about to radically transform.

And why shouldn’t it? Our interactions with technology are long overdue for a makeover.

Stagnant innovation

Thanks to Moore’s Law, computers have been getting exponentially faster and smaller. Their form, however, hasn’t changed nearly as quickly.

The smartphone takeover of the early 2000s was the most recent revolution in computing, but despite the powerful new pocket-sized hardware, the interaction has barely changed. Text is still entered on a miniaturized version of the classic “QWERTY” keyboard. An array of applications sits on a miniature desktop, opened and closed with taps rather than clicks. It’s exactly what we’re used to, just smaller.

The mother of all demos

1968 was a remarkable year. Doug Engelbart and his team of 17 at the Augmentation Research Centre at Stanford Research Institute in Menlo Park had just created their own radical vision for the future.

On December 9th, Engelbart showed off their work in a seminal 90-minute live demonstration, now lovingly referred to as “the mother of all demos”. In it, he demonstrated virtually all of the major aspects of a modern Human-Computer Interaction (HCI) system. From visual, structured menus for navigating information to the mouse and keyboard as inputs, Engelbart revealed it all.

Which is more remarkable: how right Engelbart and his team got it the first time, or how little has changed in the nearly five decades since then?

A metaphor is a powerful thing

The answer to that question lies in another: if we had all the basic ingredients for a modern computer in 1968, why did we have to wait until the 1980s to get one?

The IBM 5150 didn’t arrive in homes and offices until 1981, despite the fact that Engelbart gave us the ingredients in 1968 and computers were used to plan the Apollo 11 moon landing and predict the 1968 presidential election. If the hardware existed and the value was obvious, why was adoption so slow?

The main answer is that people simply did not get it. The only way to interact with a computer back then was through a command-line interface (CLI). You had to tell the computer, action by action and step by step, exactly what you wanted it to do. We had no idea what was happening inside the computer, no mental model to make its inner workings understandable. Before that, human-computer interaction involved putting punch cards in the right order.

What Engelbart and his team were missing was a metaphor: a way for humans and machines to understand each other. They got one in the world’s most famous Graphical User Interface (GUI): the desktop.

This was the single most important revelation for making computers mainstream, creating enough demand to drive down costs and make personal computers accessible to the average consumer. The Apple Lisa introduced a menu bar and window controls in 1983, and by 1985 the Atari ST and Commodore Amiga had arrived. Since then, apart from performance and size, little has changed. Keyboards for text input, pointers for targeting, and flat desktops for storing things all remain. Making computers smaller has drastically transformed where people do their computing, but not how. The biggest input change has been learning to type on a QWERTY keyboard with our thumbs.

A new metaphor for a new computer

It’s time for a new metaphor. We have changed what counts as a computer, but not how people think about them.

Maybe this time, the technology world can start with the metaphor, making technological marvels accessible to all from day one.

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of The Economist Intelligence Unit Limited (EIU) or any other member of The Economist Group. The Economist Group (including the EIU) cannot accept any responsibility or liability for reliance by any person on this article or any of the information, opinions or conclusions set out in the article.

Related content


Accelerating urban intelligence: People, business and the cities of tomorrow

About the research

Accelerating urban intelligence: People, business and the cities of tomorrow is an Economist Intelligence Unit report, sponsored by Nutanix. It explores expectations of citizens and businesses for smart-city development in some of the world’s major urban centres. The analysis is based on two parallel surveys conducted in 19 cities: one of 6,746 residents and another of 969 business executives. The cities included are Amsterdam, Copenhagen, Dubai, Frankfurt, Hong Kong, Johannesburg, London, Los Angeles, Mumbai, New York, Paris, Riyadh, San Francisco, São Paulo, Singapore, Stockholm, Sydney, Tokyo and Zurich.

Respondents to the citizen survey were evenly balanced by age (roughly one-third in each of the 18-38, 39-54 and 55 years and older age groups) and gender. A majority (56%) had household incomes above the median level in their city, with 44% below it. Respondents to the business survey were mainly senior executives (65% at C-suite or director level) working in a range of different functions. They work in large, midsize and small firms in over a dozen industries. See the report appendix for full survey results and demographics.

Additional insights were obtained from in-depth interviews with city officials, smart-city experts at NGOs and other institutions, and business executives. We would like to thank the following individuals for their time and insights.

The report was written by Denis McCauley and edited by Michael Gold.


Talent for innovation

Talent for innovation: Getting noticed in a global market incorporates case studies of the 34 companies selected as Technology Pioneers in biotechnology/health, energy/environmental technology, and information technology.

Leonardo Da Vinci unquestionably had it in the 15th century; so did Thomas Edison in the 19th century. But today, "talent for innovation" means something rather different. Innovation is no longer the work of one individual toiling in a workshop. In today's globalised, interconnected world, innovation is the work of teams, often based in particular innovation hotspots, and often collaborating with partners, suppliers and customers both nearby and in other countries.

Innovation has become a global activity as it has become easier for ideas and talented people to move from one country to another. This has both quickened the pace of technological development and presented many new opportunities, as creative individuals have become increasingly prized and there has been greater recognition of new sources of talent, beyond the traditional innovation hotspots of the developed world.

The result is a global exchange of ideas, and a global market for innovation talent. Along with growth in international trade and foreign direct investment, the mobility of talent is one of the hallmarks of modern globalisation. Talented innovators are regarded by companies, universities and governments as a vital resource, as precious as oil or water. They are sought after for the simple reason that innovation in products and services is generally agreed to be a large component, if not the largest component, in driving economic growth. It should be noted that "innovation" in this context does not simply mean the development of new, cutting-edge technologies by researchers.

It also includes the creative ways in which other people then refine, repackage and combine those technologies and bring them to market. Indeed, in his recent book, "The Venturesome Economy", Amar Bhidé, professor of business at Columbia University, argues that such "orchestration" of innovation can actually be more important in driving economic activity than pure research. "In a world where breakthrough ideas easily cross national borders, the origin of ideas is inconsequential," he writes. Ideas cross borders not just in the form of research papers, e-mails and web pages, but also inside the heads of talented people. This movement of talent is not simply driven by financial incentives. Individuals may also be motivated by a desire for greater academic freedom, better access to research facilities and funding, or the opportunity to work with key researchers in a particular field.

Countries that can attract talented individuals can benefit from more rapid economic growth, closer collaboration with the countries where those individuals originated, and the likelihood that immigrant entrepreneurs will set up new companies and create jobs. Mobility of talent helps to link companies to sources of foreign innovation and research expertise, to the benefit of both. Workers who emigrate to another country may bring valuable knowledge of their home markets with them, which can subsequently help companies in the destination country to enter those markets more easily. Analysis of scientific journals suggests that international co-authorship is increasing, and there is some evidence that collaborative work has a greater impact than work carried out in one country. Skilled individuals also act as repositories of knowledge, training the next generation and passing on their accumulated wisdom.

But the picture is complicated by a number of concerns. In developed countries which have historically depended to a large extent on foreign talent (such as the United States), there is anxiety that it is becoming increasingly difficult to attract talent as new opportunities arise elsewhere. Compared with the situation a decade ago, Indian software engineers, for example, may be more inclined to set up a company in India, rather than moving to America to work for a software company there. In developed countries that have not historically relied on foreign talent (such as Germany), meanwhile, the ageing of the population as the birth rate falls and life expectancy increases means there is a need to widen the supply of talent, as skilled workers leave the workforce and young people show less interest than they used to in technical subjects. And in developing countries, where there is a huge supply of new talent (hundreds of thousands of engineers graduate from Indian and Chinese universities every year), the worry is that these graduates have a broad technical grounding but may lack the specialised skills demanded by particular industries.

Other shifts are also under way. The increasing sophistication of emerging economies (notably India and China) is overturning the old model of "create in the West, customise for the East". Indian and Chinese companies are now globally competitive in many industries. And although the mobility of talent is increasing, workers who move to another country are less likely to stay for the long term, and are more likely to return to their country of origin. The number of Chinese students studying abroad increased from 125,000 in 2002 to 134,000 in 2006, for example, but the proportion who stayed in the country where they studied after graduating fell from 85% to 69% over the same period, according to figures from the OECD (see page 10).

What is clear is that the emergence of a global market for talent means gifted innovators are more likely to be able to succeed, and new and unexpected opportunities are being exploited, as this year's Technology Pioneers demonstrate. They highlight three important aspects of the global market for talent: the benefits of mobility, the significant role of diasporas, and the importance of network effects in catalysing innovation.

Brain drain, or gain?

Perhaps the most familiar aspect of the debate about flows of talent is the widely expressed concern about the "brain drain" from countries that supply talented workers. If a country educates workers at the taxpayers' expense, does it not have a claim on their talent? There are also worries that the loss of skilled workers can hamper institutional development and drive up the cost of technical services. But such concerns must be weighed against the benefits of greater mobility.

There are not always opportunities for skilled individuals in their country of birth. The prospect of emigration can encourage the development of skills by individuals who may not in fact decide to emigrate. Workers who emigrate may send remittances back to their families at home, which can be a significant source of income and can help to alleviate poverty. And skilled workers may return to their home countries after a period working abroad, further stimulating knowledge transfer and improving the prospects for domestic growth, since they will maintain contacts with researchers overseas. As a result, argues a recent report from the OECD, it makes more sense to talk of a complex process of "brain circulation" rather than a one-way "brain drain". The movement of talent is not simply a zero-sum game in which sending countries lose and receiving countries benefit. Greater availability and mobility of talent opens up new possibilities and can benefit everyone.

Consider, for example, BioMedica Diagnostics of Windsor, Nova Scotia. The company makes medical diagnostic systems, some of them battery-operated, that can be used to provide health care in remote regions to people who would otherwise lack access to it. It was founded by Abdullah Kirumira, a Ugandan biochemist who moved to Canada in 1990 and became a professor at Acadia University. There he developed a rapid test for HIV in conjunction with one of his students, Hermes Chan (a native of Hong Kong who had moved to Canada to study). According to the United States Centers for Disease Control, around one-third of people tested for HIV do not return to get the result when it takes days or weeks to determine. Dr Kirumira and Dr Chan developed a new test that provides the result in three minutes, so that a diagnosis can be made on the spot. Dr Kirumira is a prolific inventor who went on to found several companies, and has been described as "the pioneer of Nova Scotia's biotechnology sector".

Today BioMedica makes a range of diagnostic products that are portable, affordable and robust, making them ideally suited for use in developing countries. They allow people to be rapidly screened for a range of conditions, including HIV, hepatitis, malaria, rubella, typhoid and cholera. The firm's customers include the World Health Organisation. Providing such tests to patients in the developing world is a personal mission of Dr Kirumira's, but it also makes sound business sense: the market for in vitro diagnostics in the developing world is growing by over 25% a year, the company notes, compared with growth of only 5% a year in developed nations.

Moving to Canada gave Dr Kirumira research opportunities and access to venture funding that were not available in Uganda. His innovations now provide an affordable way for hospitals in his native continent of Africa to perform vital tests. A similar example is provided by mPedigree, a start-up that has developed a mobile-phone-based system that allows people to verify the authenticity of medicines. Counterfeit drugs are widespread in the developing world: they are estimated to account for 10-25% of all drugs sold, and over 80% in some countries. The World Health Organisation estimates that a fake vaccine for meningitis, distributed in Niger in 1995, killed over 2,500 people. mPedigree was established by Bright Simons, a Ghanaian social entrepreneur, in conjunction with Ashifi Gogo, a fellow Ghanaian. The two were more than just acquaintances, having met at secondary school. There are many high-tech authentication systems available in the developed world for drug packaging, involving radio-frequency identification (RFID) chips, DNA tags, and so forth.

The mPedigree system developed by Mr Gogo, an engineering student, is much cheaper and simpler and only requires the use of a mobile phone — an item that is now spreading more quickly in Africa than in any other region of the world. Once the drugs have been purchased, a panel on the label is scratched off to reveal a special code. The patient then sends this code, by text message, to a particular number. The code is looked up in a database and a message is sent back specifying whether the drugs are genuine. The system is free to use because the drug companies cover the cost of the text messages. It was launched in Ghana in 2007, and mPedigree's founders hope to extend it to all 48 sub-Saharan African countries within a decade, and to other parts of the developing world.

The effort is being supported by Ghana's Food and Drug Board, and by local telecoms operators and drug manufacturers. Mr Gogo has now been admitted into a special programme at Dartmouth College in the United States that develops entrepreneurial skills, in addition to technical skills, in engineers. Like Dr Kirumira, he is benefiting from opportunities that did not exist in his home country, and his country is benefiting too. The case of mPedigree shows that it is wrong to assume that the movement of talent is one-way (from poor to rich countries) and permanent. As it has become easier to travel and communications technology has improved, skilled workers have become more likely to spend brief spells in other countries that provide opportunities, rather than emigrating permanently.

And many entrepreneurs and innovators shuttle between two or more places — between Tel Aviv and Silicon Valley, for example, or Silicon Valley and Hsinchu in Taiwan — in a pattern of "circular" migration, in which it is no longer meaningful to distinguish between "sending" and "receiving" countries.

The benefits of a diaspora

Migration (whether temporary, permanent or circular) to a foreign country can be facilitated by the existence of a diaspora, since it can be easier to adjust to a new culture when you are surrounded by compatriots who have already done so. Some observers worry that diasporas make migration too easy, in the sense that they may encourage a larger number of talented individuals to leave their home country than would otherwise be the case, to the detriment of that country.

But as with the broader debate about migration, this turns out to be only part of the story. Diasporas can have a powerful positive effect in promoting innovation and benefiting the home country. Large American technology firms, for example, have set up research centres in India in part because they have been impressed by the calibre of the migrant Indian engineers they have employed in America. Diasporas also provide a channel for knowledge and skills to pass back to the home country.

James Nakagawa, a Canadian of Japanese origin and the founder of Mobile Healthcare, is a case in point. A third-generation immigrant, he grew up in Canada but decided in 1994 to move to Japan, where he worked for a number of technology firms and set up his own financial-services consultancy. In 2000 he had the idea that led him to found Mobile Healthcare, when a friend was diagnosed with diabetes and lamented that he found it difficult to determine which foods to eat, and which to avoid.

The rapid spread of advanced mobile phones in Japan, a world leader in mobile telecoms, prompted Mr Nakagawa to devise Lifewatcher, Mobile Healthcare's main product. It is a "disease self-management system" used in conjunction with a doctor, based around a secure online database that can be accessed via a mobile phone. Patients record what medicines they are taking and what food they are eating, taking a picture of each meal. A database of common foodstuffs, including menu items from restaurants and fast-food chains, helps users work out what they can safely eat. Patients can also call up their medical records to follow the progress of key health indicators, such as blood sugar, blood pressure, cholesterol levels and calorie intake.

All of this information can also be accessed online by the patient's doctor or nutritionist. The system allows people with diabetes or obesity (both of which are rapidly becoming more prevalent in Japan and elsewhere) to take an active role in managing their conditions. Mr Nakagawa did three months of research in the United States and Canada while developing Lifewatcher, which was created with support from Apple (which helped with hardware and software), the Japanese Red Cross and Japan's Ministry of Health and Welfare (which provided full access to its nutritional database).

Japanese patients who are enrolled in the system have 70% of the cost covered by their health insurance. Mr Nakagawa is now working to introduce Lifewatcher in the United States and Canada, where obesity and diabetes are also becoming more widespread — along with advanced mobile phones of the kind once found only in Japan. Mr Nakagawa's ability to move freely between Japanese and North American cultures, combining the telecoms expertise of the former with the entrepreneurial approach of the latter, has resulted in a system that can benefit both.

The story of Calvin Chin, the Chinese-American founder of Qifang, is similar. Mr Chin was born and educated in America, and worked in the financial services and technology industries for several years before moving to China. Expatriate Chinese who return to the country, enticed by opportunities in its fast-growing economy, are known as "returning turtles". Qifang is a "peer to peer" (P2P) lending site that enables students to borrow money to finance their education from other users of the site. P2P lending has been pioneered by sites such as Zopa and Prosper in other countries.

Such sites require would-be borrowers to provide a range of personal details about themselves to reassure lenders, and perform credit checks on them. Borrowers pay above-market rates, which is what attracts lenders. Qifang adds several twists to this formula. It is concentrating solely on student loans, which means that regulators are more likely to look favourably on the company's unusual business model. It allows payments to be made directly to educational institutions, to make sure the money goes to the right place. Qifang also requires borrowers to give their parents' names when taking out a loan, which increases the social pressure on them not to default, since that would cause the family to lose face.

Mr Chin has thus tuned an existing business model to take account of the cultural and regulatory environment in China, where P2P lending could be particularly attractive, given the relatively undeveloped state of China's financial-services market. In a sense, Qifang is just an updated, online version of the community group-lending schemes that are commonly used to finance education in China. The company's motto is that "everyone should be able to get an education, no matter their financial means".

Just as Mr Chin is trying to use knowledge acquired in the developed world to help people in his mother country of China, Sachin Duggal hopes his company, Nivio, will do something similar for people in India. Mr Duggal was born in Britain and is of Indian extraction. He worked in financial services, including a stint as a technologist at Deutsche Bank, before setting up Nivio, which essentially provides a PC desktop, personalised with a user's software and documents, that can be accessed from any web browser.

This approach makes it possible to centralise the management of PCs in a large company, and is already popular in the business world. But Mr Duggal hopes that it will also make computing more accessible to people who find the prospect of owning and managing their own PCs (and dealing with spam and viruses) too daunting, or simply cannot afford a PC at all. Nivio's software was developed in India, where Mr Duggal teamed up with Iqbal Gandham, the founder of Net4India, one of India's first internet service providers. Mr Duggal believes that the "virtual webtop" model could have great potential in extending access to computers to rural parts of India, and thus spreading the opportunities associated with the country's high-tech boom. A survey of the bosses of Indian software firms clearly shows how diasporas can promote innovation.

It found that those bosses who had lived abroad and returned to India made far more use of diaspora links upon their return than entrepreneurs who had never lived abroad, which gave them access to capital and skills in other countries. Diasporas can, in other words, help to ensure that "brain drain" does indeed turn into "brain gain", provided the government of the country in question puts appropriate policies in place to facilitate the movement of people, technology and capital.

Making the connection

Multinational companies can also play an important role in providing new opportunities for talented individuals, and facilitating the transfer of skills. In recent years many technology companies have set up large operations in India, for example, in order to benefit from the availability of talented engineers and the services provided by local companies. Is this simply exploitation of low-paid workers by Western companies?

The example of JiGrahak Mobility Solutions, a start-up based in Bangalore, illustrates why it is not. The company was founded by Sourabh Jain, an engineering graduate from the Delhi Institute of Technology. After completing his studies he went to work for the Indian research arm of Lucent Technologies, an American telecoms-equipment firm. This gave him a solid grounding in mobile-phone technology, which subsequently enabled him to set up JiGrahak, a company that provides a mobile-commerce service called Ngpay.

In India, where many people first experience the internet on a mobile phone, rather than a PC, and where mobile phones are far more widespread than PCs, there is much potential for phone-based shopping and payment services. Ngpay lets users buy tickets, pay bills and transfer money using their handsets. Such is its popularity that within months of its launch in 2008, Ngpay accounted for 4% of ticket sales at Fame, an Indian cinema chain.

The role of large companies in nurturing talented individuals, who then leave to set up their own companies, is widely understood in Silicon Valley. Start-ups are often founded by alumni from Sun, HP, Oracle and other big names. Rather than worrying that they could be raising their own future competitors, large companies understand that the resulting dynamic, innovative environment benefits everyone, as large firms spawn, compete with and acquire smaller ones.

As large firms establish outposts in developing countries, such catalysis of innovation is becoming more widespread. Companies with large numbers of employees and former employees spread around the world can function rather like a corporate diaspora, in short, providing another form of network along which skills and technology can diffuse. The network that has had the greatest impact on spreading ideas, promoting innovation and allowing potential partners to find out about each other's research is, of course, the internet. As access to the internet becomes more widespread, it can allow developing countries to link up more closely with developed countries, as the rise of India's software industry illustrates. But it can also promote links between developing countries.

The Cows to Kilowatts Partnership, based in Nigeria, provides an unusual example. It was founded by Joseph Adelagan, a Nigerian engineer, who was concerned about the impact on local rivers of effluent from the Bodija Market abattoir in Ibadan. As well as polluting the water supply of several nearby villages, the effluent carried animal diseases that could be passed to humans. Dr Adelagan proposed setting up an effluent-treatment plant.

He discovered, however, that although treating the effluent would reduce water pollution, the process would produce carbon-dioxide and methane emissions that contribute to climate change. So he began to look for ways to capture these gases and make use of them. Researching the subject online, he found that a research institution in Thailand, the Centre for Waste Utilisation and Management at King Mongkut University of Technology Thonburi, had developed anaerobic reactors that could transform agro-industrial waste into biogas. He made contact with the Thai researchers, and together they developed a version of the technology suitable for use in Nigeria that turns the abattoir waste into clean household cooking gas and organic fertiliser, thus reducing the need for expensive chemical fertiliser. The same approach could be applied across Africa, Dr Adelagan believes. The Cows to Kilowatts project illustrates the global nature of modern innovation, facilitated by the free movement of both ideas and people. Thanks to the internet, people in one part of the world can easily make contact with people trying to solve similar problems elsewhere.

Lessons learned

What policies should governments adopt in order to develop and attract innovation talent, encourage its movement and benefit from its circulation? At the most basic level, investment in education is vital. Perhaps surprisingly, however, Amar Bhidé of Columbia University suggests that promoting innovation does not mean pushing as many students as possible into technical subjects.

Although researchers and technologists provide the raw material for innovation, he points out, a crucial role in orchestrating innovation is also played by entrepreneurs who may not have a technical background. So it is important to promote a mixture of skills. A strong education system also has the potential to attract skilled foreign students, academics and researchers, and gives foreign companies an incentive to establish nearby research and development operations.

Many countries already offer research grants, scholarships and tax benefits to attract talented immigrants. In many cases immigration procedures are "fast tracked" for individuals working in science and technology. But there is still scope to remove barriers to the mobility of talent. Mobility of skilled workers increasingly involves short stays, rather than permanent moves, but this is not yet widely reflected in immigration policy. Removing barriers to short-term stays can increase "brain circulation" and promote diaspora links.

Another problem for many skilled workers is that their qualifications are not always recognised in other countries. Greater harmonisation of standards for qualifications is one way to tackle this problem; some countries also have formal systems to evaluate foreign qualifications and determine their local equivalents. Countries must also provide an open and flexible business environment to ensure that promising innovations can be brought to market. If market access or financial backing are not available, after all, today's globe-trotting innovators increasingly have the option of going elsewhere.

The most important point is that the global competition for talent is not a zero-sum game in which some countries win, and others lose. As the Technology Pioneers described here demonstrate, the nature of innovation, and the global movement of talent and ideas, is far more complicated than the simplistic notion of a "talent war" between developed and developing nations would suggest. Innovation is a global activity, and granting the greatest possible freedom to innovators can help to ensure that the ideas they generate will benefit the greatest possible number of people.


Integrated Transformation: How rising customer expectations are turning com...

Modern customers have it good. Spoilt for choice and convenience, today's empowered consumers have come to expect more from the businesses they interact with. This doesn't just mean a quality product at a fair price, but also tailored goods, swift and effective customer service across different channels, and a connected experience across online and in-store shopping, with easy access to the information they need when they want it.

Meeting these expectations is a significant challenge for organisations. For many, it requires restructuring long-standing operating models, re-engineering business processes and adopting a fundamental shift in mindset to put customer experience at the heart of business decision-making. Download our report to learn more.


  • Open access
  • Published: 04 December 2018

The computer for the 21st century: present security & privacy challenges

  • Leonardo B. Oliveira 1 ,
  • Fernando Magno Quintão Pereira 2 ,
  • Rafael Misoczki 3 ,
  • Diego F. Aranha 4 ,
  • Fábio Borges 5 ,
  • Michele Nogueira 6 ,
  • Michelle Wangham 7 ,
  • Min Wu 8 &
  • Jie Liu 9  

Journal of Internet Services and Applications, volume 9, Article number: 24 (2018)


Decades have gone by since Mark Weiser published his influential work on the computer of the 21st century. Over the years, some of the UbiComp features presented in that paper have been gradually adopted by industry players in the technology market. While this technological evolution has resulted in many benefits to our society, it has also posed, along the way, countless challenges that we have yet to overcome. In this paper, we address major challenges from the areas that most afflict the UbiComp revolution:

Software Protection: weakly typed languages, polyglot software, and networked embedded systems.

Long-term Security: recent advances in cryptanalysis and quantum attacks.

Cryptography Engineering: lightweight cryptosystems and their secure implementation.

Resilience: issues related to service availability and the paramount role of resilience.

Identity Management: requirements for identity management with invisibility.

Privacy Implications: sensitive data identification and regulation.

Forensics: trustworthy evidence from the synergy of the digital and physical worlds.

We point out directions towards the solutions of those problems and claim that if we get all this right, we will turn the science fiction of UbiComp into science fact.

1 Introduction

In 1991, Mark Weiser described a vision of the Computer for the 21st Century [ 1 ]. Weiser, in his prophetic paper, argued that the most far-reaching technologies are those that allow themselves to disappear, vanishing into thin air. According to Weiser, this oblivion is a human – not a technological – phenomenon: “Whenever people learn something sufficiently well, they cease to be aware of it,” he claimed. This event is called the “tacit dimension” or “compiling” and can be witnessed, for instance, when drivers react to street signs without consciously having to process the letters S-T-O-P [ 1 ].

A quarter of a century later, however, Weiser’s dream is far from coming true. Over the years, many of his concepts regarding pervasive and ubiquitous computing (UbiComp) [ 2 , 3 ] have been materialized into what today we call Wireless Sensor Networks [ 4 , 5 ], the Internet of Things [ 6 , 7 ], Wearables [ 8 , 9 ], and Cyber-Physical Systems [ 10 , 11 ]. The applications of these systems range from traffic accident and CO2 emission monitoring to autonomous automobiles and in-home patient care. Nevertheless, besides all their benefits, the advent of those systems has also brought about some drawbacks. And, unless we address them appropriately, the continuity of Weiser’s prophecy will be at stake.

UbiComp poses new drawbacks because, vis-à-vis traditional computing, it exhibits an entirely different outlook [ 12 ]. Computer systems in UbiComp, for instance, feature sensors, CPUs, and actuators. Respectively, this means they can hear (or spy on) the user, process her/his data (and, possibly, find out something confidential about her/him), and respond to her/his actions (or, ultimately, expose her/him by revealing some secret). Those capabilities, in turn, make proposals for conventional computers ill-suited to the UbiComp setting and present new challenges.

In the above scenarios, some of the most critical challenges lie in the areas of Security and Privacy [ 13 ]. This is because the market and users often pursue systems full of features at the expense of proper operation and protection, even though, as computing elements pervade our daily lives, the demand for stronger security schemes becomes greater than ever. Notably, there is a dire need for a secure mechanism able to encompass all aspects and manifestations of UbiComp, across time as well as space, in a seamless and efficient manner.

In this paper, we discuss contemporary security and privacy issues in the context of UbiComp (Fig.  1 ). We examine multiple research problems still open and point to promising approaches towards their solutions. More precisely, we investigate the following challenges and their ramifications.

Figure 1: Current security and privacy issues in UbiComp

Software protection in Section 2: we study the impact of the adoption of weakly typed languages by resource-constrained devices and discuss mechanisms to mitigate this impact. We go over techniques to validate polyglot software (i.e., software based on multiple programming languages), and revisit promising methods to analyze networked embedded systems.

Long-term security in Section 3: we examine the security of today’s widely used cryptosystems (e.g., RSA- and ECC-based), present some of the latest threats (e.g., advances in cryptanalysis and quantum attacks), and explore new directions and challenges to guarantee long-term security in the UbiComp setting.

Cryptography engineering in Section 4: we restate the essential role of cryptography in safeguarding computers, discuss the status quo of lightweight cryptosystems and their secure implementation, and highlight challenges in key management protocols.

Resilience in Section 5: we highlight issues related to service availability and reinforce the importance of resilience in the context of UbiComp.

Identity Management in Section 6: we examine the main requirements for promoting identity management (IdM) in UbiComp systems so as to achieve invisibility, revisit the most widely used federated IdM protocols, and explore open questions and research opportunities to provide a proper IdM approach for pervasive computing.

Privacy implications in Section 7: we explain why security is necessary but not sufficient to ensure privacy, go over important privacy-related issues (e.g., sensitive data identification and regulation), and discuss some tools of the trade to address those (e.g., privacy-preserving protocols based on homomorphic encryption).

Forensics in Section 8: we present the benefits of the synergistic use of physical and digital evidence to facilitate trustworthy operation of cyber systems.

We believe that only if we tackle these challenges right can we turn the science fiction of UbiComp into science fact.

In particular, we choose to address the areas above because they represent promising research directions and cover different aspects of UbiComp security and privacy.

2 Software protection

Modern UbiComp systems are rarely built from scratch. Components developed by different organizations, with different programming models and tools, and under different assumptions are integrated to offer complex capabilities. In this section, we analyze the software ecosystem that emerges from such a world. Figure 2 provides a high-level representation of this ecosystem. In the rest of this section, we shall focus specifically on three aspects of this environment which pose security challenges to developers: the security shortcomings of C and C++, the dominant programming languages among cyber-physical implementations; the interactions between these languages and other programming languages; and the consequences of these interactions for the distributed nature of UbiComp applications. We start by diving deeper into the idiosyncrasies of C and C++.

Figure 2: A UbiComp system is formed by modules implemented in a combination of different programming languages. This diversity poses challenges to software security.

2.1 Type safety

A great deal of the software used in UbiComp systems is implemented in C or in C++. This fact is natural, given the unparalleled efficiency of these two programming languages. However, if, on the one hand, C and C++ yield efficient executables, on the other hand, their weak type systems give rise to a plethora of software vulnerabilities. In programming-language argot, we say that a type system is weak when it does not support two key properties: progress and preservation [ 14 ]. The formal definitions of these properties are immaterial for the discussion that follows. It suffices to know that, as a consequence of weak typing, neither C nor C++ ensures, for instance, bounded memory accesses. Therefore, programs written in these languages can access invalid memory positions. As an illustration of the dangers incurred by this possibility, it suffices to know that out-of-bounds accesses are the principle behind buffer-overflow exploits.
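To make the danger concrete, here is a minimal sketch (illustrative names, not taken from the paper) of the kind of program that the C type system happily accepts:

```c
#include <stdio.h>
#include <string.h>

/* The compiler accepts this program, yet the second strcpy writes past the
 * end of `buf`, corrupting whatever lives next to it on the stack -- the
 * principle behind classic buffer-overflow exploits. */
int main(void) {
    char buf[8];
    char secret[8] = "token";

    strcpy(buf, "ok");                /* fits: 3 bytes including '\0'      */
    strcpy(buf, "AAAAAAAAAAAAAAAA");  /* 17 bytes into an 8-byte buffer:
                                         undefined behavior; may overwrite
                                         `secret` or the return address    */
    printf("%s %s\n", buf, secret);
    return 0;
}
```

Compilers and sanitizers can catch this particular constant-size case (building with `-fsanitize=address`, for instance, reports the invalid write at run time, an instance of the dynamic techniques discussed below), but the language itself offers no such guarantee.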

The software security community has been developing different techniques to deal with the intrinsic vulnerabilities of C/C++/assembly software. Such techniques can be fully static, fully dynamic or a hybrid of both approaches. Static protection mechanisms are implemented at the compiler level; dynamic mechanisms are implemented at the runtime level. In the rest of this section, we list the most well-known elements in each category.

Static analyses provide a conservative estimate of the program behavior, without requiring the execution of the program. This broad family of techniques includes, for instance, abstract interpretation [ 15 ], model checking [ 16 ] and guided proofs [ 17 ]. The main advantages of static analyses are their low runtime overhead and their soundness: inferred properties are guaranteed to always hold true. However, static analyses also have disadvantages. In particular, most of the interesting properties of programs lie in undecidable territory [ 18 ]. Furthermore, the verification of many formal properties, even when decidable, incurs a prohibitive computational cost [ 19 ].

Dynamic analyses come in several flavors: testing (KLEE [ 20 ]), profiling (Aprof [ 21 ], Gprof [ 22 ]), symbolic execution (DART [ 23 ]), emulation (Valgrind [ 24 ]), and binary instrumentation (Pin [ 25 ]). The virtues and limitations of dynamic analyses are exactly the opposite of those found in static techniques. Dynamic analyses usually do not raise false alarms: bugs are described by examples, which normally lead to consistent reproduction [ 26 ]. However, they are not required to always find security vulnerabilities in software. Furthermore, the runtime overhead of dynamic analyses still makes it prohibitive to deploy them into production software [ 27 ].

As a middle point, several research groups have proposed ways to combine static and dynamic analyses, producing different kinds of hybrid approaches to secure low-level code. This combination might yield security guarantees that are strictly more powerful than what could be obtained by either the static or the dynamic approaches, when used separately [ 28 ]. Nevertheless, negative results still hold: if an attacker can take control of the program, usually he or she can circumvent state-of-the-art hybrid protection mechanisms, such as control flow integrity [ 29 ]. This fact is, ultimately, a consequence of the weak type system adopted by languages normally seen in the implementation of UbiComp systems. Therefore, the design and deployment of techniques that can guard such programming languages, without compromising their efficiency to the point where they will no longer be adequate to UbiComp development, remains an open problem.

In spite of the difficulties of bringing formal methods to play a larger role in the design and implementation of programming languages, much has already been accomplished in this field. Testimony to this statement is the fact that today researchers are able to ensure the safety of entire operating system kernels, as demonstrated by Gerwin et al. [ 30 ], and to ensure that compilers meet the semantics of the languages that they process [ 31 ]. Nevertheless, it is reasonable to think that certain safety measures might come at the cost of performance and therefore we foresee that much of the effort of the research community in the coming years will be dedicated to making formal methods not only more powerful and expressive, but also more efficient to be used in practice.

2.2 Polyglot programming

Polyglot programming is the art and discipline of writing source code that involves two or more programming languages. It is common among implementations of cyber-physical systems. As an example, Ginga, the Brazilian protocol for digital TV, is mostly implemented in Lua and C [ 32 ]. Figure  3 shows an example of communication between a C and a Lua program. Other examples of interactions between programming languages include bindings between C and Python [ 33 ], C and Elixir [ 34 ] and the Java Native Interface [ 35 ]. Polyglot programming complicates the protection of systems. Difficulties arise due to a lack of multi-language tools and due to unchecked memory bindings between C/C++ and other languages.

Figure 3: Two-way communication between a C and a Lua program
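To make that boundary concrete, the sketch below (assuming Lua 5.2 or later and linking against the Lua C API) embeds a Lua interpreter in a C host: the C side defines a Lua function, calls it through the interpreter's stack, and reads the result back as an untyped stack slot.

```c
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

int main(void) {
    lua_State *L = luaL_newstate();   /* create a Lua VM inside the C host */
    luaL_openlibs(L);

    /* C -> Lua: load a Lua function into the VM */
    if (luaL_dostring(L, "function add(a, b) return a + b end") != LUA_OK) {
        fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
        return 1;
    }

    /* C -> Lua call: push the function and its arguments, then invoke it */
    lua_getglobal(L, "add");
    lua_pushnumber(L, 2);
    lua_pushnumber(L, 3);
    lua_call(L, 2, 1);                /* 2 arguments, 1 result */

    /* Lua -> C: the result crosses back as a stack slot that the C side
     * must interpret correctly -- exactly the kind of boundary that
     * single-language analysis tools cannot see across. */
    printf("add(2, 3) = %g\n", lua_tonumber(L, -1));
    lua_pop(L, 1);

    lua_close(L);
    return 0;
}
```

Each crossing of the C/Lua boundary in this sketch is invisible to a tool that parses only one of the two languages, which is precisely the validation problem discussed next.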

An obstacle to the validation of polyglot software is the lack of tools that analyze source code written in different programming languages, under a unified framework. Returning to Fig.  3 , we have a system formed by two programs, written in different programming languages. Any tool that analyzes this system as a whole must be able to parse these two distinct syntaxes and infer the connection points between them. Work has been performed towards this end, but solutions are still very preliminary. As an example, Maas et al. [ 33 ] have implemented automatic ways to check if C arrays are correctly read by Python programs. As another example, Furr and Foster [ 36 ] have described techniques to ensure type-safety of OCaml-to-C and Java-to-C bindings.

A promising direction for analyzing polyglot systems is based on the idea of compiling source code that is only partially available. This feat consists of reconstructing the missing syntax and the missing declarations necessary to produce a minimal version of the original program that can be analyzed by typical tools. The analysis of partially available code makes it possible to test parts of a polyglot program separately, so as to produce a cohesive view of the entire system. This technique has been demonstrated to yield analyzable Java source code [ 37 ], and compilable C code [ 38 ]. Notice that this type of reconstruction is not restricted to high-level programming languages. Testimony to this fact is the notion of micro execution, introduced by Patrice Godefroid [ 39 ]. Godefroid’s tool allows the testing of x86 binaries, even when object files are missing. Nevertheless, in spite of these developments, the reconstruction is still restricted to the static semantics of programs. The synthesis of behavior is a thriving discipline in computer science [ 40 ], but still far away from enabling the certification of polyglot systems.
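A tiny illustration of the idea, with hypothetical names: the fragment below calls two functions whose definitions live in files we do not have. Reconstructing just their declarations (plus, here, trivial stub bodies so the example also runs) is enough to compile the fragment in isolation and feed it to ordinary analysis tools.

```c
#include <stdio.h>

/* --- reconstructed declarations -----------------------------------------
 * In the original system these would come from headers we do not have; a
 * tool for compiling partially available code infers them from the way the
 * fragment uses the missing functions, so the types below are guesses that
 * are merely good enough for analysis. */
int  sensor_read(int id);
void log_event(const char *tag, int value);

/* --- the fragment under analysis ---------------------------------------- */
int poll_and_log(int id) {
    int v = sensor_read(id);
    log_event("sensor", v);
    return v;
}

/* --- trivial stubs so the reconstructed program also links and runs ----- */
int  sensor_read(int id)               { return 42 + id; }
void log_event(const char *tag, int v) { printf("%s=%d\n", tag, v); }

int main(void) { return poll_and_log(1) > 0 ? 0 : 1; }
```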

2.3 Distributed programming

Ubiquitous computing systems tend to be distributed. It is difficult even to conceive of an application in this world that does not interact with other programs. And it is common knowledge that distributed programming opens up several doors to malicious users. Therefore, to make cyber-physical technology safer, security tools must be aware of the distributed nature of such systems. Yet two main challenges stand in the way of this requirement: the difficulty of building a holistic view of the distributed application, and the lack of semantic information bound to messages exchanged between processes that communicate through a network.

To be accurate, the analysis of a distributed system needs to account for the interactions between the several program parts that constitute this system [ 41 ]. Discovering such interactions is difficult, even if we restrict ourselves to code written in a single programming language. Difficulties stem from the lack of semantic information associated with operations that send and receive messages. In other words, such operations are defined as part of a library, not as part of the programming language itself. Notwithstanding this fact, there are several techniques that infer communication channels between different pieces of source code. As examples, we have the algorithms of Greg Bronevetsky [ 42 ] and Teixeira et al. [ 43 ], which build a distributed view of a program’s control flow graph (CFG). Classic static analyses work without further modification on this distributed CFG. However, the distributed CFG is still a conservative approximation of the program behavior. Thus, it forces already imprecise static analyses to deal with communication channels that might never exist during the execution of the program. The rising popularity of actor-based libraries, like those available in languages such as Elixir [ 34 ] and Scala [ 44 ], is likely to mitigate the channel-inference problem. In the actor model, channels are explicit in the messages exchanged between the different processing elements that constitute a distributed system. Nevertheless, whether such a model will be widely adopted by the IoT community remains to be seen.

Tools that perform automatic analyses of programs rely on static information to produce more precise results. In this sense, types are core to the understanding of software. For instance, in Java and other object-oriented languages, the type of objects determines how information flows along the program code. Despite this importance, however, messages exchanged in the vast majority of distributed systems are not typed. The reason is that such messages, at least in C, C++ and assembly software, are arrays of bytes. There have been two major efforts to mitigate this problem: the addition of messages as first-class values to programming languages, and the implementation of points-to analyses able to deal with pointer arithmetic in languages that lack such a feature. Concerning the first front, several programming languages, such as Scala, Erlang and Elixir, incorporate messages as basic constructs, providing developers with very expressive ways to implement the actor model [ 45 ] – a core foundation of distributed programming. Even though the construction of programming abstractions around the actor model is not a new idea [ 45 ], their rising popularity seems to be a phenomenon of the 2000s, boosted by increasingly expressive abstractions [ 46 ] and increasingly efficient implementations [ 47 ]. On the second front, researchers have devised analyses that infer the contents [ 48 ] and the size of arrays [ 49 ] in weakly typed programming languages. More importantly, recent years have seen a new flurry of algorithms designed to analyze C/C++-style pointer arithmetic [ 50 – 53 ]. The wide adoption of higher-level programming languages coupled with the construction of new tools to analyze lower-level languages is exciting. This trend seems to indicate that the programming languages community is dedicating ever more attention to the task of implementing safer distributed software. Therefore, even though the design of tools able to analyze the very fabric of UbiComp still poses several challenges to researchers, we can look to the future with optimism.
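The sketch below (a hypothetical message layout, not taken from the paper) illustrates why such untyped messages are hard to reason about: the moment a typed value is serialized for transmission, it becomes an anonymous array of bytes whose meaning exists only by convention between sender and receiver.

```c
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical message exchanged between two UbiComp nodes. */
struct reading {
    uint32_t sensor_id;
    float    celsius;
};

int main(void) {
    unsigned char wire[64];            /* stands in for send()/recv() over a socket */
    struct reading out = { 7, 21.5f };

    /* Sender: the typed struct is flattened into anonymous bytes.  From
     * here on, nothing in the program text records what they mean. */
    memcpy(wire, &out, sizeof out);

    /* Receiver: the bytes are reinterpreted purely by convention.  If the
     * two ends disagree on struct layout, endianness or message kind, no
     * type checker complains; the mismatch only appears at run time. */
    struct reading in;
    memcpy(&in, wire, sizeof in);
    printf("sensor %" PRIu32 " reads %.1f C\n", in.sensor_id, in.celsius);
    return 0;
}
```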

3 Long-term security

Various UbiComp systems are designed to endure a lifespan of many years, even decades [ 54 , 55 ]. Systems in the context of critical infrastructure, for example, often require an enormous financial investment to be designed and deployed in the field [ 56 ], and therefore offer a better return on investment if they remain in use for a longer period of time. The automotive sector is a field of particular interest. Vehicles are expected to be reliable for decades [ 57 ], and renewing vehicle fleets or updating features (recalls) increases costs for their owners. Note that modern vehicles are part of the UbiComp ecosystem, as they are equipped with embedded devices with Internet connectivity. In the future, it is expected that vehicles will depend even more on data collected and shared across other vehicles and infrastructure through wireless technologies [ 58 ] in order to enable enriched driving experiences such as autonomous driving [ 59 ].

It is also worth mentioning that systems designed to endure a lifespan of several years or decades might suffer from a lack of future maintenance. Competition among players able to innovate is very aggressive, leading to a high rate of companies going out of business within a few years [ 60 ]. A world inundated by devices without proper maintenance will pose serious future challenges [ 61 ].

From the few aforementioned examples, it is already evident that there is an increasing need for UbiComp systems to be reliable for a longer period of time and, whenever possible, to require as few updates as possible. These requirements have a direct impact on the security features of such systems: comparatively speaking, they offer fewer opportunities for patching eventual security breaches than conventional systems. This is a critical situation given the intense and dynamic progress in devising and exploiting new security breaches. Therefore, it is of utmost importance to understand what the scientific challenges are to ensure long-term security from the early stages of the design of a UbiComp system, instead of resorting to palliative measures a posteriori.

3.1 Cryptography as the core component

Ensuring long-term security is quite a challenging task for any system, not only for UbiComp systems. At a minimum, it requires that every single security component be future-proof by itself and also when connected to other components. To simplify this excessively large attack surface and still be able to provide helpful recommendations, we will focus our attention on the main ingredient of most security mechanisms, as highlighted in Section 4 : cryptography.

There are numerous types of cryptographic techniques. The most traditional ones rely on the hardness of computational problems such as integer factorization [ 62 ] and discrete logarithm problems [ 63 , 64 ]. These problems are believed to be intractable with current cryptanalysis techniques and the available technological resources. Because of that, cryptographers have been able to build secure instantiations of cryptosystems based on such computational problems. For various reasons (to be discussed in the following sections), however, the future-proof condition of such schemes is at stake.

3.2 Advancements in classical cryptanalysis

The first threat to the future-proof condition of any cryptosystem refers to potential advancements in cryptanalysis, i.e., in techniques aiming at solving the underlying security problem more efficiently (with less processing time, memory, etc.) than originally predicted. Widely-deployed schemes have a long track record of academic and industrial scrutiny, and therefore one would expect little or no progress on the cryptanalysis techniques targeting such schemes. Yet, the literature has recently shown some interesting and unexpected results that may suggest the opposite.

In [ 65 ], for example, Barbulescu et al. introduced a new quasi-polynomial algorithm to solve the discrete logarithm problem in finite fields of small characteristic. The discrete logarithm problem is the underlying security problem of the Diffie-Hellman key exchange [ 66 ], the Digital Signature Algorithm [ 67 ] and their elliptic curve variants (ECDH [ 68 ] and ECDSA [ 67 ], respectively), to mention just a few widely-deployed cryptosystems. This cryptanalytic result is restricted to finite fields of small characteristic, which is an important limitation for attacking real-world implementations of the aforementioned schemes. However, any sub-exponential algorithm that solves a longstanding problem should be seen as a relevant indication that the cryptanalysis literature might still be subject to eventual breakthroughs.

This situation should be considered by architects designing UbiComp systems that have long-term security as a requirement. Implementations that support various (i.e., higher than usual) security levels are preferable to fixed, single key-size support. The same approach used for keys should be applied to other quantities in the scheme that somehow impact its overall security. In this way, UbiComp systems would be able to consciously accommodate future cryptanalytic advancements or, at the very least, reduce the costs of security upgrades.

3.3 Future disruption due to quantum attacks

Quantum computers are expected to offer dramatic speedups for solving certain computational problems, as foreseen by Daniel R. Simon in his seminal paper on quantum algorithms [ 69 ]. Some of these speedups may enable significant advancements in technologies currently limited by their algorithmic inefficiency [ 70 ]. On the other hand, to our misfortune, some of the affected computational problems are the ones currently being used to secure widely-deployed cryptosystems.

As an example, Lov K. Grover introduced a quantum algorithm [ 71 ] able to find an element in the domain of a function (of size N ) which leads, with high probability, to a desired output in only \(O(\sqrt {N})\) steps. This algorithm can be used to speed up the cryptanalysis of symmetric cryptography. Block ciphers with n -bit keys, for example, would offer only n /2 bits of security against a quantum adversary. Hash functions would be affected in ways that depend on the expected security property. In more detail, hash functions with n -bit digests would offer only n /3 bits of security against collision attacks and n /2 bits of security against pre-image attacks. Table  1 summarizes this assessment. In this context, AES-128 and SHA-256 (collision resistance) would not meet the minimum acceptable security level of 128 bits (of quantum security). Note that both block cipher and hash function constructions will remain secure if longer keys and digest sizes are employed. However, this would lead to important performance challenges. AES-256, for example, is about 40% less efficient than AES-128 (due to its 14 rounds, instead of 10).
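
As a back-of-the-envelope illustration of these estimates, the helper below (our own naming, following the rough n/2 and n/3 rules above) flags symmetric parameters that fall short of a 128-bit quantum security target:

    # Rough quantum security margins for symmetric primitives, following the
    # estimates in the text: Grover halves key strength, and collision
    # resistance of an n-bit digest drops to about n/3 bits. Names and the
    # 128-bit target are illustrative.
    def grover_key_strength(key_bits):
        return key_bits / 2

    def quantum_collision_strength(digest_bits):
        return digest_bits / 3

    def quantum_preimage_strength(digest_bits):
        return digest_bits / 2

    TARGET = 128
    for name, bits in [("AES-128", 128), ("AES-256", 256)]:
        ok = grover_key_strength(bits) >= TARGET
        print(f"{name}: ~{grover_key_strength(bits):.0f} quantum bits ({'ok' if ok else 'below target'})")
    for name, bits in [("SHA-256", 256), ("SHA3-384", 384)]:
        ok = quantum_collision_strength(bits) >= TARGET
        print(f"{name} collisions: ~{quantum_collision_strength(bits):.0f} quantum bits ({'ok' if ok else 'below target'})")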

Even more critical than the scenario for symmetric cryptography, quantum computers will offer an exponential speedup for attacking most of the widely-deployed public-key cryptosystems. This is due to Peter Shor’s algorithm [ 72 ], which can factor large integers and compute the discrete logarithm of an element in large groups in polynomial time. The impact of this work will be devastating for RSA and ECC-based schemes, as increasing the key sizes would not suffice: they will need to be completely replaced.

In the field of quantum-resistant public-key cryptosystems, i.e., alternative public-key schemes that can withstand quantum attacks, several challenges need to be addressed. The first one refers to establishing a consensus in both academia and industry on how to defeat quantum attacks. In particular, there are two main techniques considered capable of withstanding quantum attacks, namely post-quantum cryptography (PQC) and quantum cryptography (QC). The former is based on different computational problems believed to be so hard that not even quantum computers would be able to tackle them. One important benefit of PQC schemes is that they can be implemented and deployed on the computers currently available [ 73 – 77 ]. The latter (QC) depends on the existence and deployment of a quantum infrastructure, and is restricted to key-exchange purposes [ 78 ]. The limited capabilities and the very high costs of deploying quantum infrastructure should eventually lead to a consensus towards the post-quantum cryptography trend.

There are several PQC schemes available in the literature. Hash-Based Signatures (HBS), for example, are the most accredited solutions for digital signatures. The most modern constructions [ 76 , 77 ] represent improvements of the Merkle signature scheme [ 74 ]. One important benefit of HBS is that their security relies solely on certain well-known properties of hash functions (thus they are secure against quantum attacks, assuming appropriate digest sizes are used). Regarding other security features, such as key exchange and asymmetric encryption, the academic and industrial communities have not reached a consensus yet, although both the code-based and lattice-based cryptography literatures have already presented promising schemes [ 79 – 85 ]. Isogeny-based cryptography [ 86 ] is a much more recent approach that enjoys certain practical benefits (such as fairly small public-key sizes [ 87 , 88 ]), although it has only just started to benefit from a more comprehensive understanding of its cryptanalytic properties [ 89 ]. Regarding standardization efforts, NIST has recently started a Standardization Process on Post-Quantum Cryptography schemes [ 90 ], which should take at least a few more years to be concluded. The current absence of standards represents an important challenge. In particular, future interoperability problems might arise.

Finally, another challenge in the context of post-quantum public-key cryptosystems refers to potentially new implementation requirements or constraints. As mentioned before, hash-based signatures are very promising post-quantum candidates (given their efficiency and the security of the underlying hash functions), but they also lead to a new set of implementation challenges, such as the task of keeping the scheme state secure. In more detail, most HBS schemes have private keys (their state ) that evolve over time. If rigid state-management policies are not in place, a signer may re-use the same one-time private key twice, something that would void the security guarantees offered by the scheme. Recently, initial works to address these new implementation challenges have appeared in the literature [ 91 ]. A recently introduced HBS construction [ 92 ] showed how to get rid of the state-management issue at the price of much larger signatures. These examples indicate potentially new implementation challenges for PQC schemes that must be addressed by UbiComp systems architects.
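
A minimal sketch of this state-management discipline is shown below: the next one-time key index is committed to stable storage before a signature is released, so a crash or rollback cannot lead to index reuse. The one_time_sign placeholder stands in for the actual HBS primitive and is purely illustrative:

    # Sketch of stateful key handling for a hash-based signature scheme. The
    # index of the next unused one-time key is persisted *before* signing, so
    # the same index can never be handed out twice. `one_time_sign` is a
    # hypothetical placeholder for the real primitive (e.g., WOTS+ in XMSS).
    import json
    from pathlib import Path

    def one_time_sign(message, index):
        return b"signature-with-key-%d" % index  # placeholder only

    class StatefulSigner:
        def __init__(self, state_file, total_keys):
            self.path = Path(state_file)
            self.total_keys = total_keys
            self.next_index = (json.loads(self.path.read_text())["next"]
                               if self.path.exists() else 0)

        def sign(self, message):
            if self.next_index >= self.total_keys:
                raise RuntimeError("all one-time keys exhausted")
            index = self.next_index
            # Commit the new state before releasing the signature.
            self.path.write_text(json.dumps({"next": index + 1}))
            self.next_index = index + 1
            return one_time_sign(message, index)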

4 Cryptographic engineering

UbiComp systems involve building blocks of very different natures: hardware components such as sensors and actuators, embedded software implementing communication protocols and interfaces with cloud providers, and ultimately operational procedures and other human factors. As a result, pervasive systems have a large attack surface that must be protected using a combination of techniques.

Cryptography is a fundamental part of any modern computing system, but it is unlikely to be the weakest component in its attack surface. Networking protocols, input-parsing routines and even the interface code to cryptographic mechanisms are components much more likely to be vulnerable to exploitation. However, a successful attack on cryptographic security properties is usually disastrous due to the risk concentrated in cryptographic primitives. For example, violations of confidentiality may cause massive data breaches involving sensitive information. Adversarial interference on communication integrity may allow command-injection attacks that deviate from the specified behavior. Availability is crucial to keep the system accessible by legitimate users and to guarantee continuous service provisioning, so cryptographic mechanisms must also be lightweight to minimize the potential for abuse by attackers.

Physical access by adversaries to portions of the attack surface is a particularly challenging aspect of deploying cryptography in UbiComp systems. By assumption, adversaries can recover long-term secrets and credentials that provide some control over a (hopefully small) portion of the system. Below we will explore some of the main challenges in deploying cryptographic mechanisms for pervasive systems, including how to manage keys and how to realize efficient and secure implementations of cryptography.

4.1 Key management

UbiComp systems are by definition heterogeneous platforms, connecting devices of massively different computation and storage power. Designing a cryptographic architecture for any heterogeneous system requires assigning clearly defined roles and corresponding security properties for the tasks under the responsibility of each entity in the system. Resource-constrained devices should receive less computationally intensive tasks, and their lack of tamper-resistance protections indicates that long-term secrets should not reside on these devices. More critical tasks involving expensive public-key cryptography should be delegated to more powerful nodes. A careful trade-off between security properties, functionality and cryptographic primitives must then be addressed per device or class of devices [ 93 ], following a set of guidelines for pervasive systems:

Functionality: key management protocols must manage the lifetime of cryptographic keys and ensure their accessibility to currently authorized users, but handling key management and authorization separately may increase complexity and vulnerabilities. A promising way of combining the two services into a cryptographically-enforced access control framework is attribute-based encryption [ 94 , 95 ], where keys have sets of capabilities and attributes that can be authorized and revoked on demand.

Communication: components should minimize the amount of communication required, since they risk being unable to operate if communication is disrupted. Non-interactive approaches for key distribution [ 96 ] are recommended here, but advanced protocols based on bilinear pairings should be avoided due to recent advances on solving the discrete log problem (in the so-called medium prime case [ 97 ]). These advances force an increase in parameter sizes, reduce performance and scalability, and may be improved further, favoring more traditional forms of asymmetric cryptography.

Efficiency: protocols should be lightweight and easy to implement, mandating that traditional public key infrastructures (PKIs) and expensive certificate handling operations are restricted to the more powerful and connected nodes in the architecture. Alternative models supporting implicit certification include identity-based [ 98 ] (IBC) and certificate-less cryptography [ 99 ] (CLPKC), the former implying inherent key escrow. The difficulties with key revocation still impose obstacles to their wide adoption, despite progress [ 100 ]. A lightweight, pairing-free and escrow-less authenticated key agreement based on an efficient key exchange protocol and implicit certificates combines the advantages of the two approaches, providing high performance while saving bandwidth [ 101 ].

Interoperability: pervasive systems are composed of components originating from different manufacturers. Supporting a cross-domain authentication and authorization framework is crucial for interoperability [ 102 ].

Cryptographic primitives involved in joint functionality must then be compatible with all endpoints and respect the constraints of the less powerful devices.

4.2 Lightweight cryptography

The emergence of huge collections of interconnected devices in UbiComp motivates the development of novel cryptographic primitives, under the moniker of lightweight cryptography . The term lightweight does not imply weaker cryptography, but rather application-tailored cryptography that is especially designed to be efficient in terms of resource consumption, such as processor cycles, energy and memory footprint [ 103 ]. Lightweight designs aim to meet common security requirements for cryptography but may adopt less conservative choices or more recent building blocks.

As a first example, many new block ciphers have been proposed as lightweight alternatives to the Advanced Encryption Standard (AES) [ 104 ]. Important constructions are LS-Designs [ 105 ], modern ARX and Feistel networks [ 106 ], and substitution-permutation networks [ 107 , 108 ]. A notable candidate is the PRESENT block cipher, which has a 10-year track record of resisting cryptanalytic attempts [ 109 ] and whose performance has recently become competitive in software [ 110 ].

In the case of hash functions, a design may even trade off advanced security properties (such as collision resistance) for simplicity in some scenarios. A clear case is the construction of short Message Authentication Codes (MACs) from non-collision-resistant hash functions, such as in SipHash [ 111 ], or digital signatures from short-input hash functions [ 112 ]. In conventional applications, BLAKE2 [ 113 ] is a stronger drop-in replacement for recently cryptanalyzed standards [ 114 ] and faster in software than the recently published SHA-3 standard [ 115 ].
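
For instance, Python's standard hashlib exposes BLAKE2 with a built-in keyed mode, which can serve directly as a short MAC without a separate HMAC construction; the key and digest sizes below are illustrative choices, not recommendations:

    # Keyed BLAKE2b from the standard library used as a short MAC.
    import hashlib, hmac, os

    key = os.urandom(32)                       # shared secret
    msg = b"sensor-42:temperature=21.5"

    tag = hashlib.blake2b(msg, key=key, digest_size=16).digest()

    # Receiver side: recompute and compare in constant time.
    expected = hashlib.blake2b(msg, key=key, digest_size=16).digest()
    assert hmac.compare_digest(tag, expected)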

Another trend is to provide confidentiality and authentication in a single step, through Authenticated Encryption with Associated Data (AEAD). This can be implemented with a block cipher mode of operation (like GCM [ 116 ]) or a dedicated design. The CAESAR competition Footnote 1 selected new AEAD algorithms for standardization across multiple use cases, such as lightweight and high-performance applications and a defense-in-depth setting. NIST has followed through and started its own standardization process for lightweight AEAD algorithms and hash functions Footnote 2 .
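
A typical AEAD usage pattern looks like the sketch below, which assumes the third-party Python cryptography package; the essential contract is that the nonce never repeats under the same key and that the associated data is authenticated but not encrypted:

    # AES-GCM sketch using the third-party `cryptography` package
    # (pip install cryptography). Key size, nonce handling and the associated
    # data shown here are illustrative.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)

    nonce = os.urandom(12)                 # 96-bit nonce, unique per message
    aad = b"device-id=7;fw=1.0.3"          # authenticated, not encrypted
    ciphertext = aead.encrypt(nonce, b"open valve 3", aad)

    # decrypt() raises InvalidTag if the ciphertext or the AAD were modified.
    assert aead.decrypt(nonce, ciphertext, aad) == b"open valve 3"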

In terms of public-key cryptography, Elliptic Curve Cryptography (ECC) [ 63 , 117 ] continues to be the main contender in this space against factoring-based cryptosystems [ 62 ], due to an underlying problem conjectured to be fully exponential on classical computers. Modern instantiations of ECC enjoy high performance and implementation simplicity and are well suited for embedded systems [ 118 – 120 ]. The dominance of number-theoretic primitives is, however, threatened by quantum computers, as described in Section 3 .

The plethora of new primitives must be rigorously evaluated from both the security and the performance points of view, involving both theoretical work and engineering aspects. Implementations are expected to consume less energy [ 121 ], fewer cycles and less memory [ 122 ] in ever-smaller devices and under ever more invasive attacks.

4.3 Side-channel resistance

If implemented without care, an otherwise secure cryptographic algorithm or protocol can leak critical information which may be useful to an attacker. Side-channel attacks [ 123 ] are a significant threat against cryptography and may use timing information, cache latency, power and electromagnetic emanations to recover secret material. These attacks emerge from the interaction between the implementation and the underlying computer architecture and represent an intrinsic security problem in pervasive computing environments, since the attacker is assumed to have physical access to at least some of the legitimate devices.

Protecting against intrusive side-channel attacks is a challenging research problem, and countermeasures typically promote some degree of regularity in computation. Isochronous or constant-time implementations were among the first strategies to tackle this problem in the case of variances in execution time or latency in the memory hierarchy. The application of formal methods has enabled the first tools to verify the isochronicity of implementations, such as information-flow analysis [ 124 ] and program transformations [ 125 ].
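
The issue is easy to see in code: a naive byte-by-byte comparison returns as soon as a mismatch is found, so its running time depends on secret data, whereas the standard library offers a comparison whose time does not. The snippet below is a minimal illustration, not a complete countermeasure:

    import hmac

    def naive_equal(a, b):
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:       # early exit leaks the position of the first mismatch
                return False
        return True

    def constant_time_equal(a, b):
        # hmac.compare_digest runs in time independent of where bytes differ.
        return hmac.compare_digest(a, b)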

While there is a recent trend towards constructing and standardizing cryptographic algorithms with some embedded resistance against the simpler timing and power-analysis attacks [ 105 ], more powerful attacks such as differential power analysis [ 126 ] or fault attacks [ 127 ] are very hard to prevent or mitigate. Fault injection became a much more powerful attack methodology after it was demonstrated in software [ 128 ].

Masking techniques [ 129 ] are frequently investigated as a countermeasure to decorrelate leaked information from secret data, but they typically require robust entropy sources to achieve their goal. Randomness-recycling techniques have been useful as a heuristic, but the formal security analysis of such approaches is an open problem [ 130 ]. Modifications to the underlying architecture in terms of instruction set extensions, simplified execution environments and transactional mechanisms for restarting faulty computations are another promising research direction, but may involve radical and possibly cost-prohibitive changes to current hardware.
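
The core idea of first-order Boolean masking can be sketched in a few lines: the secret never appears in the clear, and computation proceeds on random shares whose XOR equals the secret. Real countermeasures apply this to every intermediate value of a cipher and need a fresh, high-quality mask for each execution; the toy example below is purely illustrative:

    # Toy first-order Boolean masking of a single byte.
    import secrets

    def mask(secret_byte):
        m = secrets.randbits(8)            # fresh random mask
        return (m, secret_byte ^ m)        # shares: (mask, masked value)

    def masked_xor(shares, constant):
        m, v = shares
        return (m, v ^ constant)           # linear operations touch one share

    def unmask(shares):
        m, v = shares
        return m ^ v

    shares = mask(0x3A)
    shares = masked_xor(shares, 0xFF)
    assert unmask(shares) == 0x3A ^ 0xFF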

5 Resilience

UbiComp relies on essential services such as connectivity, routing and end-to-end communication. Advances in those essential services make possible the pervasive applications envisioned by Weiser, which can count on transparent communication while meeting the expectations and requirements of end users in their daily activities. Among users’ expectations and requirements, the availability of services – not only communication services, but all services provided to users by UbiComp – is paramount. Users increasingly expect, and pay for, services that are available 24/7. This is even more relevant when we think about critical UbiComp systems, such as those related to healthcare, emergency services, and vehicular embedded systems.

Resilience is highlighted in this article because it is one of the pillars of security. Resilience aims at identifying, preventing, detecting and responding to process or technological failures in order to recover from, or mitigate, the damage and financial losses resulting from service unavailability [ 131 ]. In general, service unavailability has been associated with non-intentional failures; however, the intentional exploitation of service-availability breaches is increasingly becoming disruptive and out of control, as seen in the recent Distributed Denial of Service (DDoS) attack against DYN, a leading DNS provider, and the DDoS attack against OVH, the French website-hosting giant [ 132 , 133 ]. The latter reached a volume of malicious traffic of approximately 1 Tbps, generated from a large number of geographically distributed and infected devices, such as printers, IP cameras, residential gateways and baby monitors. Those devices are directly related to the modern concept of UbiComp systems [ 134 ] and are intended to provide ubiquitous services to users.

However, what attracts the most attention here is the negative side effect of exploiting this ubiquity against service availability. It is a fact today that Mark Weiser’s idea of the Computer for the 21st Century has opened doors to new kinds of highly disruptive attacks. Those attacks are in general based on the very invisibility and unawareness of the devices in our homes, workplaces, cities, and countries. Precisely because of this, people seem not to pay enough attention to basic practices, such as changing default passwords on Internet-connected devices such as CCTV cameras, baby monitors and smart TVs. This simple fact has been pointed out as the main cause of the two DDoS attacks mentioned before, and a report by the global professional services company Deloitte suggests that DDoS attacks, which compromise exactly service availability, increased in size and scale in 2017, thanks in part to the growing multiverse of connected things Footnote 3 . The report also mentions that DDoS attacks will become more frequent, with an estimated 10 million attacks in a few months.

As there is no guarantee of completely avoiding these attacks, resilient solutions become a way to mitigate damage and quickly restore the availability of services. Resilience is thus necessary and complementary to the other solutions discussed in the previous sections of this article. Hence, this section focuses on highlighting the importance of resilience in the context of UbiComp systems. We overview the state of the art regarding resilience in UbiComp systems and point out future directions for research and innovation [ 135 – 138 ]. We also understand that resilience in these systems still requires much investigation; nevertheless, we believe it is our role to raise this point for discussion in this article.

In order to contextualize resilience in the scope of UbiComp, it is important to observe that improvements in information and communication technologies, such as wireless networking, have increased the use of distributed systems in our everyday lives. Network access is becoming ubiquitous through portable devices and wireless communications, making people more and more dependent on them. This growing dependence demands simultaneously high levels of reliability and availability. Current networks are composed of heterogeneous portable devices, generally communicating among themselves in a wireless multi-hop manner [ 139 ]. These wireless networks can autonomously adapt to changes in their environment, such as device position, traffic pattern and interference. Each device can dynamically reconfigure its topology, coverage and channel allocation in accordance with such changes.

UbiComp poses nontrivial challenges to resilience design due to the characteristics of current networks, such as the shared wireless medium, highly dynamic network topology, multi-hop communication and low physical protection of portable devices [ 140 , 141 ]. Moreover, the absence of central entities in different scenarios increases the complexity of resilience management, particularly when it is associated with access control, node authentication and cryptographic key distribution.

Network characteristics, as well as the limitations of other kinds of solutions against attacks that disrupt service availability, reinforce the fact that no network is totally immune to attacks and intrusions. Therefore, new approaches are required to promote the availability of network services. Such requirements motivate the design of resilient network services. In this work, we focus on the delivery of data from one UbiComp device to another as a fundamental network functionality, and we emphasize three essential services: physical and link-layer connectivity, routing, and end-to-end logical communication. However, resilience has also been observed from other perspectives. We follow the claim that resilience is achieved through a cross-layer security solution that integrates preventive (e.g., cryptography and access control), reactive (e.g., intrusion detection systems) and tolerant (e.g., packet redundancy) defense lines in a self-adaptive and coordinated way [ 131 , 142 ].

However, what are the open challenges that remain to achieve resilience in the UbiComp context? First of all, we emphasize the heterogeneity of devices and technologies that compose UbiComp environments. The integration of large-scale systems, such as cloud data centers, with tiny devices, such as wearable and implantable sensors, is a huge challenge in itself due to the complexity resulting from it. In addition, providing the integration of preventive, reactive and tolerant solutions, and their adaptation, is even harder in the face of the different requirements of these devices, their capabilities in terms of memory and processing, and application requirements. Further, heterogeneity in terms of communication technologies and protocols makes it challenging to analyze network behavior and topologies, which in conventional systems is employed to assist in the design of resilient solutions.

Another challenge is how to deal with scale. First, UbiComp systems tend to be hyper-scale and geographically distributed. How, then, to cope with the complexity resulting from that? How to define and construct models to understand these systems and offer resilient services? Finally, we also point out uncertainty and speed as challenges. If, on the one hand, it is hard to model, analyze and define resilient services in such complex systems, on the other hand uncertainty is the norm in them, while speed and low response times are strong requirements for the applications in these systems. Hence, how to address all these elements together? How to manage them in order to offer resilient services considering the diverse kinds of requirements of the various applications?

All these questions lead to deep investigation and challenges. However, they also show opportunities for applied research in designing and engineering resilient systems, particularly in the UbiComp context, and especially if we advocate designing resilient systems that manage the three defense lines in an adaptive way. We believe that this kind of management can represent a great advance for applied research and for resilience.

6 Identity management

An Authentication and Authorization Infrastructure (AAI) is the central element for providing security in distributed applications, and it is a way to fulfill the security requirements of UbiComp systems. With such an infrastructure it is possible to provide identity management, preventing legitimate or illegitimate users/devices from accessing unauthorized resources. IdM can be defined as a set of processes, technologies and policies used for assurance of identity information (e.g., identifiers, credentials, attributes), assurance of the identity of an entity (e.g., users, devices, systems), and enabling business and security applications [ 143 ]. Thus, IdM allows these identities to be used by authentication, authorization and auditing mechanisms [ 144 ]. A proper identity management approach is necessary for pervasive computing to be invisible to users [ 145 ]. Figure  4 provides an overview of the topics discussed in this section.

Figure 4. Pervasive IdM challenges

According to [ 143 ], an electronic identity (eID) comprises a set of data about an entity that is sufficient to identify that entity in a particular digital context. An eID may be composed of:

Identifier - a series of digits, characters and symbols or any other form of data used to uniquely identify an entity (e.g., UserIDs, e-mail addresses, URIs and IP addresses). IoT requires a globally unique identifier for each entity in the network;

Credentials - an identifiable object that can be used to authenticate the claimant (e.g., digital certificates, keys, tokens and biometrics);

Attributes - descriptive information bound to an entity that specifies its characteristics.

In UbiComp systems, identity has both a digital and a physical component. Some entities might have only an online or physical representation, whereas others might have a presence in both planes. IdM requires relationships not only between entities in the same plane but also across them [ 145 ].

6.1 Identity management system

An IdM system deals with the lifecycle of an identity, which consists of registration, storage, retrieval, provisioning and revocation of identity attributes [ 146 ]. Note that managing a device’s identity lifecycle is more complicated than managing a person’s, due to the complexity of the operational phases of a device (i.e., from manufacturing to removal and re-commissioning) in the context of a given application or use case [ 102 , 147 ].

For example, consider a given device lifecycle. In the pre-deployment phase, some cryptographic material is loaded into the device during its manufacturing process. Next, the owner of the device purchases it and gets a PIN that grants the owner initial access to the device. The device is later installed and commissioned within a network by an installer during the bootstrapping phase. The device identity and the secret keys used during normal operation are provided to the device during this phase. After being bootstrapped, the device is in operational mode. During this operational phase, the device will need to prove its identity (D2D communication) and to control the access to its resources/data. For devices with lifetimes spanning several years, maintenance cycles are required. During each maintenance phase, the software on the device can be upgraded, or applications (running on the device) can be reconfigured. The device continues to loop through the operational phase until it is decommissioned at the end of its lifecycle. Furthermore, the device can also be removed and re-commissioned to be used in a different system under a different owner, thereby starting the lifecycle all over again. During this phase, the cryptographic material held by the device is wiped, and the owner is unbound from the device [ 147 ].

An IdM system involves two main entities: identity provider (IdP - responsible for authentication and user/device information management in a domain) and service provider (SP - also known as relying party, which provides services to user/device based on their attributes). The arrangement of these entities in an IdM system and the way in which they interact with each other characterize the IdM models, which can be traditional (isolated or silo), centralized, federated or user-centric [ 146 ].

In the traditional model, IdP and SP are grouped into a single entity whose role is to authenticate and control access for its users or devices without relying on any other entity. In this model, providers do not have any mechanism to share identity information with other organizations/entities. This makes identity provisioning cumbersome for the end user or device, since users and devices need to spread their sensitive data across different providers [ 146 , 148 ].

The centralized model emerged as a possible solution to avoid the redundancies and inconsistencies of the traditional model and to give the user/device a seamless experience. Here, a central IdP becomes responsible for collecting and provisioning the user’s or device’s identity information in a manner that enforces the preferences of the user/device. The centralized model allows the sharing of identities among SPs and provides Single Sign-On (SSO). This model has several drawbacks, as the IdP not only becomes a single point of failure but also may not be trusted by all users, devices and service providers [ 146 ]. In addition, a centralized IdP must provide different mechanisms to authenticate either users or autonomous devices in order to meet UbiComp system requirements [ 149 ].

UbiComp systems are composed of heterogeneous devices that need to prove their authenticity to the entities they communicate with. One of the problems in this scenario is the possibility of devices being located in different security domains that use distinct authentication mechanisms. An approach for providing IdM in a scenario with multiple security domains is through an AAI that uses the federated IdM model (FIM) [ 150 , 151 ]. In a federation, trust relationships are established among IdPs and SPs to enable the exchange of identity information and service sharing. Existing trust relationships guarantee that users/devices authenticated at their home IdP may access protected resources provided by SPs from other security domains of the federation [ 148 ]. Single Sign-On (SSO) is obtained when the same authentication event can be used to access different federated services [ 146 ].

From the user-authentication perspective, the negative points of the centralized and federated models focus primarily on the IdP, as it has full control over the user’s data [ 148 ]. Besides, the user depends on an online IdP to provide the required credentials. In the federated model, users cannot guarantee that their information will not be disclosed to third parties without their consent [ 146 ].

The user-centric model gives the user full control over transactions involving his or her identity data [ 148 ]. In this model, the user identity can be stored on a Personal Authentication Device, such as a smartphone or a smartcard. Users have the freedom to choose the IdPs that will be used and to control the personal information disclosed to SPs. The IdPs continue acting as a trusted third party between users and SPs, but they act according to the user’s preferences [ 152 ]. The major drawback of the user-centric model is that it is not able to handle delegation. Several solutions that adopted this model combine it with the federated or centralized model; novel solutions, however, tend to prefer the federated model.

6.1.1 Authentication

User and device authentication within an integrated authentication infrastructure (where the IdP is responsible for both user and device authentication) might use a centralized IdM model [ 149 , 153 ] or a traditional model [ 154 ]. Other works [ 155 – 157 ] proposed AAIs for IoT using the federated model, however only for user authentication and not for device authentication. Kim et al. [ 158 ] propose a centralized solution that enables the use of different authentication mechanisms for devices, chosen based on device energy autonomy; however, user authentication is not provided.

Based on the traditional model, an AAI composed of a suite of protocols that incorporate authentication and access control during the entire IoT device lifecycle is proposed in [ 102 ]. Domenech et al. [ 151 ] propose an AAI for the Web of Things, which is based on the federated IdM model (FIM) and enables SSO for users and devices. In this AAI, IdPs may be implemented as a service in a cloud (IdPaaS - Identity Provider as a Service) or on premises. Some IoT platforms, such as Amazon Web Services (AWS) IoT, Microsoft Azure IoT and the Google Cloud IoT platform, provide IdPaaS for user and device authentication.

Authentication mechanisms and protocols consume computational resources. Thus, integrating an AAI into a resource-constrained embedded device can be a challenge. As mentioned in Section 4.2 , a set of lightweight cryptographic algorithms, which do not impose certificate-related overheads on devices, can be used to provide device authentication in UbiComp systems. There is a recent trend of investigating the benefits of using identity-based cryptography (IBC) to provide cross-domain authentication for constrained devices [ 102 , 151 , 159 ]. However, some IoT platforms still provide certificate-based device authentication, such as Azure IoT and WSO2, or per-device public/private key authentication (RSA and elliptic curve algorithms) using JSON Web Tokens, such as the Google Cloud IoT Platform and WSO2.

Identity theft is one of the fastest growing crimes in recent years. Currently, password-based credentials are the most used by user authentication mechanisms, despite their weaknesses [ 160 ]. There are multiple opportunities for impersonation and other attacks that fraudulently claim another subject’s identity [ 161 ]. Multi-factor authentication (MFA) is a solution created to improve the robustness of the authentication process; it generally combines two or more authentication factors ( something you know , something you have , and something you are ) for successful authentication [ 161 ]. In this type of authentication, an attacker needs to compromise two or more factors, which makes the task more complex. Several IdPs and SPs already offer MFA to authenticate their users; however, device authentication is still an open question.

6.1.2 Authorization

In a UbiComp system, a security domain can have client devices and SP devices (embedded SPs). In this context, both physical devices and online providers can offer services. Devices join and leave, SPs appear and disappear, and access control must adapt itself to maintain the user’s perception of being continuously and automatically authenticated [ 145 ]. The data access control provided by an AAI embedded in the device is also a significant requirement. Since these devices are cyber-physical systems (CPS), a security threat against them is likely to impact the physical world. Thus, if a device is improperly accessed, there is a chance that this violation will affect the physical world, risking people’s well-being and even their lives [ 151 ].

Physical access control systems (PACS) provide access control to physical resources, such as buildings, offices or any other protected areas. Current commercial PACS are based on the traditional IdM model and usually use low-cost devices such as smart cards. However, there is a trend to treat PACS as an (IT) service, i.e., unified physical and digital access [ 162 ]. Considering IoT scenarios, the translation of SSO authentication credentials for PACS across multiple domains (in a federation) is also a challenge due to interoperability, assurance and privacy concerns.

In the context of IoT, authorization mechanisms are based on access control models used in the classic Internet, such as the discretionary model (for example, Access Control Lists (ACL) [ 163 ]), Capability-Based Access Control (CapBAC) [ 164 , 165 ], Role-Based Access Control (RBAC) [ 156 , 166 , 167 ] and Attribute-Based Access Control (ABAC) [ 102 , 168 , 169 ]. ABAC and RBAC are the models best aligned with federated IdM and UbiComp systems. As proposed in [ 151 ], an IdM system that supports different access control models, such as RBAC and ABAC, can more easily adapt to the needs of the administration processes in the context of UbiComp.

Regarding policy-management models for accessing devices, there are two approaches: provisioning [ 151 , 170 ] and outsourcing [ 150 , 151 , 171 , 172 ]. In provisioning, the device is responsible for the authorization decision making, which requires the policy to be stored locally. In this approach, the Policy Enforcement Point (PEP), which controls access to the device, and the Policy Decision Point (PDP) are on the same device. In outsourcing, the decision making takes place outside the device, in a centralized external service that replies to all policy-evaluation requests from all devices (PEPs) of a domain. In this case, decision making can be offered as a service (PDPaaS) in the cloud or on premises [ 151 ].

For constrained devices, the provisioning approach is robust since it does not depend on an external service. However, in this approach, decision making and access-policy management can be costly for the device. The outsourcing approach simplifies policy management, but it has communication overhead and a single point of failure (the centralized PDP).
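
The contrast between the two approaches can be sketched as follows; the policy format, names and the stubbed external call are illustrative only:

    # A PEP that either evaluates a locally provisioned policy or outsources
    # the decision to an external PDP service (stubbed here).
    def local_pdp(policy, subject_role, action):
        """Provisioning: the decision is made on the device itself."""
        return subject_role in policy.get(action, set())

    def remote_pdp(subject_role, action):
        """Outsourcing: the decision is delegated to a central PDP."""
        # In a real deployment this would be an authenticated request to a
        # PDP-as-a-service endpoint; omitted in this sketch.
        raise NotImplementedError("external PDP call not implemented here")

    class PEP:
        def __init__(self, decide):
            self.decide = decide

        def enforce(self, subject_role, action):
            if not self.decide(subject_role, action):
                raise PermissionError(f"{subject_role} may not {action}")

    policy = {"read-temperature": {"owner", "technician"},
              "update-firmware": {"technician"}}
    pep = PEP(lambda role, action: local_pdp(policy, role, action))
    pep.enforce("technician", "update-firmware")   # allowed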

6.2 Federated identity management system

The IdM models guide the construction of policies and business processes for IdM systems but do not indicate which protocols or technologies should be adopted. The SAML (Security Assertion Markup Language) [ 173 ], OAuth2 [ 174 ] and OpenID Connect specifications stand out in the federated IdM context [ 175 , 176 ] and are adequate for UbiComp systems. SAML, developed by OASIS, is an XML-based framework for describing and exchanging security information between business partners. It defines syntax and rules for requesting, creating, communicating and using SAML assertions, which enable SSO across domain boundaries. Besides, SAML can describe authentication events that use different authentication mechanisms [ 177 ]. These characteristics are very important for achieving interoperability between security technologies of different administrative domains. According to [ 151 , 178 , 179 ], the first step toward achieving interoperability is the adoption of SAML. However, XML-based SAML is not a lightweight standard and has a high computational cost for resource-constrained IoT devices [ 176 ].

Enhanced Client or Proxy (ECP), a SAML profile, defines the exchange of security information involving clients that do not use a web browser, and consequently allows device SSO authentication. Nevertheless, ECP requires the SOAP protocol, which is not suitable due to its high computational cost [ 180 ]. Presumably due to this cost, the profile is still not widely used in IoT devices.

OpenID Connect (OIDC) is an open framework that adopts the user-centric and federated IdM models. It is decentralized, which means no central authority approves or registers SPs. With OpenID, a user can choose the OpenID Provider (IdP) he or she wants to use. OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol. It allows clients (SPs) to verify user or device identity based on the authentication performed by an Authorization Server (OpenID Provider), as well as to obtain basic profile information about the user or device in an interoperable and REST-like manner [ 181 ]. OIDC uses JSON-based security tokens (JWT) that enable identity and security information to be shared across security domains; consequently, it is a lightweight standard and suitable for IoT. Nevertheless, it is a developing standard that requires more time and enterprise acceptance to become established [ 176 ].

An IoT architecture based on OpenID, which treats authentication and access control in a federated environment, was proposed in [ 156 ]. Devices and users may register at a trusted third party of the home domain, which helps the user’s authentication process. In [ 182 ], OpenID Connect is used for authentication and authorization of users and devices and to establish trust relationships among entities in an ambient assisted living environment (with medical devices acting as SPs), in a federated approach.

SAML and OIDC are used for user authentication in cloud platforms (Google, AWS, Azure). The FIWARE platform Footnote 4 (an open-source IoT platform) supports SAML- and OAuth2-based user authentication via its Keyrock Identity Management Generic Enabler. However, platforms usually use certificate-based or token-based authentication for devices, following a centralized or traditional model. In future work, it may be interesting to perform practical investigations of SAML (the ECP profile with different lightweight authentication mechanisms) and OIDC for various types of IoT devices and cross-domain scenarios, and to compare them with current authentication solutions.

The OAuth protocol Footnote 5 is an open authorization framework that allows a user/application to delegate access to Web resources to a third party without sharing its credentials. With the OAuth protocol it is possible to use a JSON Web Token or a SAML assertion as a means of requesting an OAuth 2.0 access token, as well as for client authentication [ 176 ]. Fremantle et al. [ 150 ] discuss the use of OAuth for IoT applications that use the MQTT protocol, a lightweight message-queue protocol (publish/subscribe model) for small sensors and mobile devices.

A well-known standard for authorization in distributed systems is XACML (eXtensible Access Control Markup Language). XACML is an XML-based language for describing authorization policies and request/response formats for access control decisions. Authorization decisions may be based on user/device attributes, on the requested actions, and on environment characteristics. Such features enable the building of flexible authorization mechanisms. Furthermore, XACML is generic regardless of the access control model used (RBAC, ABAC) and enables authorization decisions to be made locally (provisioning model) or by an external service provider (outsourcing model). Another important aspect is that there are profiles and extensions that provide interoperability between XACML and SAML [ 183 ].

6.3 Pervasive IdM challenges

Current federation technologies rely on preconfigured static agreements, which are not well suited for the open environments of UbiComp scenarios. These limitations negatively impact scalability and flexibility [ 145 ]. Trust establishment is the key to scalability. Although FIM protocols can cover security aspects, dynamic trust-relationship establishment remains an open question [ 145 ]. Some requirements, such as usability, device authentication and the use of lightweight cryptography, have not been properly considered in federated IdM solutions for UbiComp systems.

Interoperability is another key requirement for a successful IdM system. UbiComp systems integrate heterogeneous devices that interact with humans, with systems on the Internet, and with other devices, which leads to interoperability concerns. These systems can be formed by heterogeneous domains (organizations) that go beyond the boundaries of a federation with the same AAI. Interoperability between federations that use different federated identity protocols (SAML, OpenID and OAuth) is still a problem and also a research opportunity.

Lastly, IdM systems for UbiComp must appropriately protect user information and adopt proper personal data protection policies. Section 7 discusses the challenges of providing privacy in UbiComp systems.

7 Privacy implications

UbiComp systems tend to collect a lot of data and generate a lot of information. Correctly used, information generates innumerable benefits for our society and has provided us with a better life over the years. However, information can also be used for illicit purposes, just as computer systems are used for attacks. Protecting private information is a great challenge that can often seem impractical; consider, for instance, protecting customers’ electrical consumption data from their electricity distribution company [ 184 – 186 ].

Ensuring security is a necessary condition for ensuring privacy: for instance, if the communication between clients and a service provider is not secure, then privacy is not guaranteed. However, it is not a sufficient condition: the communication may be secure, yet a service provider may use the data in a way that was not allowed. We can use cryptography to ensure privacy as well as security. Nevertheless, even if one uses encrypted communication, the metadata from the network traffic might reveal private information. The first challenge is to determine the extent of the data’s relevance and the impact of data leakage.

7.1 Application scenario challenges

Finding out which data might be sensitive is a challenging task. Some cultures classify certain data as sensitive while others classify the same data as public. Another challenge is to handle regulations from different countries.

7.1.1 Identifying sensitive data

Classifying what may be sensitive data can be a challenging task. Article 12 of the Universal Declaration of Human Rights, proclaimed by the United Nations General Assembly in Paris on 10 December 1948, states: No one shall be subjected to arbitrary interference with his privacy, family, home, or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks. Lawmakers have improved privacy laws around the world. However, there is still plenty of room for improvement, especially when we consider data about people, animals, and products. Providers can use such data to profile and manipulate people and markets. Unfair competitors might use private industrial data to gain advantages over other industries.

7.1.2 Regulation

UbiComp systems tend to run worldwide. Thus, their developers need to deal with several laws from distinct cultures. The abundance of laws is a challenge for international institutions, and so is the absence of laws. On the one hand, an excess of laws compels institutions to handle a huge bureaucracy in order to comply with them all. On the other hand, the absence of laws causes unfair competition, because unethical companies can use private data to gain advantages over ethical companies. Business models must use privacy-preserving protocols to ensure democracy and avoid a surveillance society (see [ 187 ]). Such protocols are the solution for the dilemma between privacy and information. However, they have their own technological challenges.

7.2 Technological challenges

We can deal either with data already collected by legacy systems or with private-by-design data collected by privacy-preserving protocols – for instance, databases used in old systems and messages from privacy-preserving protocols, respectively. If a scenario can be classified as both, we can simply tackle it as already collected data in the short term.

7.3 Already collected data

One may use a dataset for information retrieval while keeping the anonymity of the data’s true owners. One may also apply data mining techniques over a private dataset; several techniques are used in privacy-preserving data mining [ 188 ]. The ARX Data Anonymization Tool Footnote 6 is a very interesting tool for the anonymization of already collected data. In the following, we present several techniques used to provide privacy for already collected data.

7.3.1 Anonymization

Currently, we have several techniques for anonymization and for evaluating the level of anonymization, for instance, k -anonymity, l -diversity, and t -closeness [ 189 ]. They work with sets E of records that are indistinguishable with respect to an identifier in a table.

The method k -anonymity suppresses table columns or replaces their values so that each E contains at least k records. It seems safe, but only four spatio-temporal points are enough to uniquely identify 95% of cellphone users [ 190 ].
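
Checking k -anonymity itself is straightforward: group the records by their quasi-identifier values and verify that every equivalence class E has at least k records. The toy table and quasi-identifiers below are illustrative:

    from collections import Counter

    def is_k_anonymous(rows, quasi_ids, k):
        classes = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
        return all(count >= k for count in classes.values())

    table = [
        {"zip": "306**", "age": "20-30", "disease": "flu"},
        {"zip": "306**", "age": "20-30", "disease": "asthma"},
        {"zip": "306**", "age": "30-40", "disease": "flu"},
        {"zip": "306**", "age": "30-40", "disease": "diabetes"},
    ]
    print(is_k_anonymous(table, ["zip", "age"], k=2))   # True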

The method l -diversity requires that each E have at least l “well-represented” values for each sensitive column. Well-represented can be defined in three ways:

at least l distinct values for each sensitive column;

for each E , the Shannon entropy is bounded such that \(H(E)\geqslant \log _{2} l\) , where \(H(E)=-\sum _{s\in S}\Pr (E,s)\log _{2}(\Pr (E,s))\) , S is the domain of the sensitive column, and Pr( E , s ) is the fraction of the rows in E that have sensitive value s ;

the most common values cannot appear too frequently, and the most uncommon values cannot appear too infrequently.

Note that some tables do not have l distinct sensitive values. Furthermore, the table entropy should be at least \(\log _{2} l\) . Moreover, the frequencies of common and uncommon values are usually not close to each other.

We say that E has t -closeness if the distance between the distribution of a sensitive column within E and the distribution of that column in the whole table is no more than a threshold t . A table has t -closeness if every E in the table has t -closeness. This method generates a trade-off between data usefulness and privacy.

7.3.2 Differential privacy

The idea of differential privacy is similar to the idea of indistinguishability in cryptography. To define it, let ε be a positive real number and \(\mathcal {A}\) be a probabilistic algorithm that takes a dataset as input. We say that \(\mathcal {A}\) is ε -differentially private if, for every pair of datasets D 1 and D 2 that differ in one element, and for every subset S of the image of \(\mathcal {A}\) , we have \(\Pr \left [{\mathcal {A}}\left (D_{1}\right)\in S\right ]\leq e^{\epsilon }\times \Pr \left [{\mathcal {A}}\left (D_{2}\right)\in S\right ],\) where the probability is taken over the randomness of the algorithm.

Differential privacy is not a metric in the mathematical sense. However, if the algorithms keep the probabilities based on the input, we can construct a metric d to compare two algorithms by \(d\left (\mathcal {A}_{1},\mathcal {A}_{2}\right)=|\epsilon _{1}-\epsilon _{2}|.\) In this way, we can consider two algorithms equivalent when ε 1 = ε 2 , and we can determine the distance from an ideal algorithm by computing \(d\left (\mathcal {A}_{1},\mathcal {A}_{\text {ideal}}\right)=|\epsilon _{1}-0|=\epsilon _{1}.\)
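
A standard way to obtain ε -differential privacy for numeric queries in practice is the Laplace mechanism, sketched below: noise with scale sensitivity/ ε is added to the exact answer. The dataset and query are illustrative:

    import random

    def laplace_noise(scale):
        # A Laplace sample is an exponential magnitude with a random sign.
        magnitude = random.expovariate(1.0 / scale)
        return magnitude if random.random() < 0.5 else -magnitude

    def dp_count(records, predicate, epsilon):
        """epsilon-differentially private count (sensitivity of a count is 1)."""
        exact = sum(1 for r in records if predicate(r))
        return exact + laplace_noise(1.0 / epsilon)

    readings = [12.3, 8.1, 15.7, 9.9, 11.0]                 # e.g. kWh per home
    print(dp_count(readings, lambda kwh: kwh > 10, epsilon=0.5))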

7.3.3 Entropy and the degree of anonymity

The degree of anonymity g can be measured with the Shannon entropy \(H(X)=\sum _{{i=1}}^{{N}}\left [p_{i}\cdot \log _{2} \left ({\frac {1}{p_{i}}}\right)\right ],\) where H ( X ) is the network entropy, N is the number of nodes, and p i is the probability associated with node i . The maximal entropy occurs when the distribution is uniform, i.e., every node is equiprobable with probability 1/ N , hence H M = log 2 ( N ). Therefore, the degree of anonymity g is defined by \(g=1-{\frac {H_{M}-H(X)}{H_{M}}}={\frac {H(X)}{H_{M}}}.\)

Similar to differential privacy, we can construct a metric to compare the distance between two networks by computing d ( g 1 , g 2 )=| g 1 − g 2 |. Similarly, we can check whether they are equivalent, g 1 = g 2 . Thus, we can determine the distance from an ideal anonymity network by computing d ( g 1 , g ideal )=| g 1 −1|.

The network can be replaced by a dataset, but in this model each record should have an associated probability.
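
Both quantities are easy to compute from the probability an attacker assigns to each node (or record); the probability vectors below are illustrative:

    import math

    def degree_of_anonymity(probabilities):
        h = -sum(p * math.log2(p) for p in probabilities if p > 0)
        h_max = math.log2(len(probabilities))
        return h / h_max

    uniform = [0.25, 0.25, 0.25, 0.25]   # ideal: the attacker learns nothing
    skewed = [0.7, 0.1, 0.1, 0.1]        # one node is strongly suspected
    print(degree_of_anonymity(uniform))  # 1.0
    print(degree_of_anonymity(skewed))   # about 0.68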

7.3.4 Complexity

Complexity analysis can also be used as a metric to measure the time required, in the best case, to retrieve information from an anonymized dataset. It can also be applied to private-by-design data as the time required to break a privacy-preserving protocol. The time can be measured with asymptotic analysis or by counting the number of steps needed to break the method.

All techniques have their advantages and disadvantages. However, even if the complexity prevents the leakage, even if the algorithm satisfies differential privacy, and even if the degree of anonymity is maximal, privacy might still be violated. For example, in an election with 3 voters, if 2 collude, then the third voter’s privacy is violated regardless of the algorithm used. In [ 191 ], we find how to break noise-based protocols for smart grids, even when they provide the property of differential privacy.

Cryptography should ensure privacy in the same way that it ensures security. An encrypted message should achieve the maximum value of the privacy metrics, just as cryptography does for security. We should take the best algorithm for leaking private information and compute its worst-case complexity.

7.3.5 Probability

We can use probabilities to measure the chances of leakage. This approach is independent of the algorithm used to protect privacy.

For example, consider an election with 3 voters. If 2 voters cast yes and 1 voter casts no, an attacker knows that the probability of a given voter having cast yes is 2/3 and of having cast no is 1/3. The same logic applies as the number of voters and candidates grows.

Unlike the yes/no case, we may want to keep measured values private. For attackers to discover a time series of three points, they can represent each point by a number of stars, i.e., symbols ⋆ , and split the total number of stars into three boxes. If the sum of the series is 7, one possibility would be ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ . For simplicity, attackers can separate the stars with bars instead of boxes; hence, ⋆ ⋆ ⋆ ⋆ | ⋆ | ⋆ ⋆ represents the same solution. With this notation, the binomial coefficient of 7 stars plus 2 bars choose 7 stars determines the number of possible solutions, i.e., \( {7+2 \choose 7}=\frac {9!}{7!(9-7)!}=36.\)

Generalizing, if t is the number of points in a time series and s is its sum, then the number of possible time series among which the attackers must decide the correct one is given by s plus t −1 choose s , i.e., \({s+t-1 \choose s}=\frac {(s+t-1)!}{s!\,(t-1)!}.\)
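
This stars-and-bars count is a one-liner with Python's standard library (math.comb, Python 3.8+), matching the worked example above:

    from math import comb

    def candidate_series(t, s):
        """Number of length-t, non-negative integer series summing to s."""
        return comb(s + t - 1, s)

    print(candidate_series(t=3, s=7))    # 36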

If we collect multiple time series, we can form a table, e.g., a list of candidates with the number of votes per state. The tallyman could reveal only the total number of voters per state and the total number of votes per candidate, from which one could infer the possible numbers of votes per state [ 191 ]. Data from previous elections may help the estimation. The result of the election could be computed over encrypted data in a much more secure way than anonymization by k-anonymity, l-diversity, and t-closeness. Still, depending on the size of the table and its values, the time series can be found.

In general, we can consider measurements instead of values. Anonymity techniques try to reduce the number of measurements in the table. Counterintuitively, the smaller the number of measurements, the bigger the chances of discovering them [ 191 ].

If we consider privacy by design, the data have not yet been collected.

7.4 Private-by-design data

Message is the common word for private-by-design data. Messages are data that are transmitted, processed, and stored. Privacy-preserving protocols must ensure that individual messages are not leaked. CryptDB (see footnote 7) is an interesting tool that allows us to run queries over encrypted datasets: although messages are stored in a dataset, they are encrypted with the users' keys. To keep performance reasonable, privacy-preserving protocols aggregate or consolidate messages and solve a specific problem.

7.4.1 Computing all operators

In theory, we can compute a Turing machine over encrypted data, i.e., we can use a technique called fully homomorphic encryption [ 192 ] to evaluate any operator over encrypted data. The big challenge of fully homomorphic encryption is performance; constructing a fully homomorphic scheme that is practical for many application scenarios remains a herculean task. The most common operation is addition. Thus, most privacy-preserving protocols use additive homomorphic encryption [ 193 ] or DC-Nets (from “Dining Cryptographers”) [ 194 ]. Independent of the operation, the former generates functions, and the latter generates families of functions. We can construct an asymmetric DC-Net based on an additive homomorphic encryption scheme [ 194 ].
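As an illustration of the additive homomorphic property, the sketch below uses a toy Paillier-style scheme (our choice of scheme; the text does not prescribe one), in which multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The tiny primes and helper names are assumptions for readability, not a secure implementation.

```python
# Toy Paillier-style additively homomorphic encryption: Enc(m1) * Enc(m2)
# decrypts to m1 + m2. The primes are tiny on purpose; this is NOT secure.
import math
import secrets

def keygen(p: int = 293, q: int = 433):
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)              # lambda = lcm(p-1, q-1)
    g = n + 1                                 # standard simplified generator
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m: int) -> int:
    n, g = pk
    n2 = n * n
    while True:
        r = secrets.randbelow(n - 1) + 1      # random nonce coprime with n
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c: int) -> int:
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 7), encrypt(pk, 35)
aggregate = (c1 * c2) % (pk[0] ** 2)          # homomorphic addition of plaintexts
assert decrypt(pk, sk, aggregate) == 42
```

A real deployment would, of course, rely on a vetted cryptographic library and moduli of thousands of bits.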

7.4.2 Trade-off between enforcement and malleability

Privacy enforcement has a high cost. With DC-Nets, we can enforce privacy; however, every encrypted message needs to be included in the computation for users to decrypt and access the protocol output. This is good for privacy but bad for fault tolerance. For illustration, consider an election in which all voters need to vote. Homomorphic encryption enables protocols to decrypt and produce an output even when an encrypted message is missing; indeed, it enables the decryption of a single encrypted message. Therefore, homomorphic encryption alone cannot enforce privacy. For illustration, consider an election where one can read and change all votes. Homomorphic encryption schemes are malleable, and DC-Nets are non-malleable. On the one hand, malleability simplifies the process and improves fault tolerance but prevents privacy enforcement. On the other hand, non-malleability enforces privacy but complicates the process and diminishes fault tolerance. In addition, key distribution with homomorphic encryption is easier than with DC-Net schemes.

7.4.3 Key distribution

Homomorphic encryption needs a public-private key pair, and whoever owns the private key controls all the information. Assume that a receiver generates the key pair and sends the public key to the senders over a secure communication channel. The senders then use the same key to encrypt their messages. Since homomorphic encryption schemes are probabilistic, senders can use the same key to encrypt the same message and their ciphertexts will still differ from each other. However, the receiver does not know who sent each encrypted message.

A DC-Net needs a private key for each user and a public key for the protocol. Since DC-Nets do not distinguish senders and receivers, the users are usually called participants, and they generate their own private keys. Practical symmetric DC-Nets require each pair of participants to exchange a key over a secure communication channel. Afterward, each participant holds a private key given by the list of shared keys. Hence, each participant encrypts by computing \(\mathfrak {M}_{i,j}\leftarrow \text {Enc}\left (m_{i,j}\right)=m_{i,j}+\sum _{o\in \mathcal {M}-\{i\}}\, \text {Hash}\left (s_{i,o}\ || \ j\right)-\text {Hash}\left (s_{o,i}\ || \ j\right),\) where m i,j is the message sent by participant i at time j, Hash is a secure hash function agreed upon by the participants, s i,o is the secret key sent from participant i to participant o, s o,i is, similarly, the secret key received by i from o, and || is the concatenation operator. Each participant i can send the encrypted message \(\mathfrak {M}_{i,j}\) to every other participant. Thus, participants can decrypt the aggregated encrypted messages by computing \(\text {Dec}=\sum _{i\in \mathcal {M}}\, \mathfrak {M}_{i,j}=\sum _{i\in \mathcal {M}}\, m_{i,j}.\) Note that if one or more messages are missing, decryption is infeasible. Asymmetric DC-Nets do not require a private key built from shared keys: each participant simply generates a private key, and the participants then use homomorphic encryption or a symmetric DC-Net to add their private keys, generating the decryption key.
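The following sketch mirrors the symmetric DC-Net encryption rule above, with SHA-256 as the hash and arithmetic modulo a public constant; the class and variable names are ours, and the parameters are purely illustrative.

```python
# Sketch of one symmetric DC-Net round following the encryption rule above:
# pairwise hash-based pads are added and subtracted so that they cancel only
# in the sum of all ciphertexts. Class names, the modulus, and SHA-256 as Hash
# are illustrative choices.
import hashlib
import secrets

MODULUS = 2 ** 128                       # public modulus for the message space

def pad(shared_key: bytes, round_id: int) -> int:
    # Hash(s || j), reduced into the working modulus
    digest = hashlib.sha256(shared_key + round_id.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % MODULUS

class Participant:
    def __init__(self, index: int):
        self.i = index
        self.sent = {}      # s_{i,o}: keys this participant generated and sent
        self.received = {}  # s_{o,i}: keys received from the other participants

    def encrypt(self, message: int, round_id: int) -> int:
        masked = message % MODULUS
        for key in self.sent.values():
            masked = (masked + pad(key, round_id)) % MODULUS
        for key in self.received.values():
            masked = (masked - pad(key, round_id)) % MODULUS
        return masked

# Pairwise key setup over (assumed) secure channels: O(I^2) shared keys.
participants = [Participant(i) for i in range(3)]
for a in participants:
    for b in participants:
        if a.i != b.i:
            k = secrets.token_bytes(16)
            a.sent[b.i] = k        # a generated s_{a,b} ...
            b.received[a.i] = k    # ... and b stores it as a received key

messages, round_id = [10, 20, 12], 1
ciphertexts = [p.encrypt(m, round_id) for p, m in zip(participants, messages)]
# Decrypting the aggregate: all pads cancel, leaving the sum of the messages.
assert sum(ciphertexts) % MODULUS == sum(messages)
```

Because every pad appears once with a plus sign and once with a minus sign, the pads cancel only when all ciphertexts are present, which is exactly the fault-tolerance limitation discussed in Section 7.4.2.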

Homomorphic encryption schemes have lower overhead than DC-Nets for setting up keys and distributing them. Symmetric DC-Nets need O(I 2) messages to set up the keys, where I is the number of participants. Figure 5 depicts the messages needed to set up keys using (a) symmetric DC-Nets and (b) homomorphic encryption. Asymmetric DC-Nets can be set up more easily than symmetric DC-Nets, at the price of trusting the homomorphic encryption scheme.

Fig. 5 Setting up the keys. a Symmetric DC-Nets. b Homomorphic encryption

7.4.4 Aggregation and consolidation

Aggregation and consolidation are easier with DC-Nets than with homomorphic encryption. Using DC-Nets, participants can simply broadcast their encrypted messages or send them directly to an aggregator. Using homomorphic encryption, senders cannot send encrypted messages directly to the receiver, because the receiver can decrypt individual messages. The senders should somehow aggregate the encrypted messages so that the receiver gets only the encrypted aggregate, which is a challenge in homomorphic encryption and trivial in DC-Nets due to the trade-off described in Section 7.4.2. In this work, DC-Net refers to a fully connected DC-Net; for non-fully connected DC-Nets, aggregation is based on trust and generates new challenges. Sometimes aggregation and consolidation are used as synonyms. However, consolidation is more complicated and generates more elaborate information than aggregation. For example, the aggregation of encrypted textual messages just joins them, while the consolidation of encrypted textual messages could generate a speech synthesis.

7.4.5 Performance

Fully homomorphic encryption tends to have big keys and requires prohibitive processing time. In contrast, asymmetric DC-Nets and partially homomorphic encryption normally use modular multi-exponentiations, which can be computed in logarithmic time [ 195 ]. Symmetric DC-Nets are efficient only for a small number of participants, because each participant needs to iterate over all other participants to encrypt a message. The number of participants is not relevant for asymmetric DC-Nets or for homomorphic encryption.
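For reference, the sketch below shows one common way to perform a modular multi-exponentiation \(g_1^{e_1} g_2^{e_2} \bmod n\) in a single square-and-multiply pass (Shamir's trick); this is an illustrative choice of algorithm, not necessarily the one analyzed in [ 195 ].

```python
# Hedged sketch: simultaneous modular multi-exponentiation (Shamir's trick),
# computing g1^e1 * g2^e2 mod n with one square-and-multiply pass over the
# exponent bits. The numeric parameters below are arbitrary small test values.
def multi_exp(g1: int, e1: int, g2: int, e2: int, n: int) -> int:
    table = {(0, 0): 1, (1, 0): g1 % n, (0, 1): g2 % n, (1, 1): (g1 * g2) % n}
    result = 1
    for bit in range(max(e1.bit_length(), e2.bit_length()) - 1, -1, -1):
        result = (result * result) % n                          # squaring step
        result = (result * table[((e1 >> bit) & 1, (e2 >> bit) & 1)]) % n
    return result

assert multi_exp(7, 123, 11, 456, 1009) == (pow(7, 123, 1009) * pow(11, 456, 1009)) % 1009
```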

8 Forensics

Digital forensics is a branch of forensic science addressing the recovery and investigation of material found in digital devices. Evidence collection and interpretation play a key role in forensics. Conventional forensic approaches separately address issues related to computer forensics and information forensics. There is, however, a growing trend in security and forensics research that utilizes interdisciplinary approaches to provide a rich set of forensic capabilities to facilitate the authentication of data as well as the access conditions including who, when, where, and how.

In this trend, there are two major types of forensic evidence [ 196 ]. One type is intrinsic to the device, the information processing chain, or the physical environment, taking such forms as the special characteristics associated with specific types of hardware, software processing, or environment; the unique noise patterns that act as a signature of a specific device unit; certain regularities or correlations related to a device, a processing step, or their combinations; and more. The other type is extrinsic, whereby specially designed data are proactively injected into the signals/data or into the physical world and later extracted and examined to infer or verify the hosting data's origin, integrity, processing history, or capturing environment.

Amid the convergence between digital and physical systems, with sensors, actuators, and computing devices becoming closely tied together, an emerging framework called Proof-Carrying Sensing (PCS) has been proposed [ 197 ]. It was inspired by Proof-Carrying Code, a trusted computing framework that associates foreign executables with a model proving that they have not been tampered with and that they function as expected. In the new UbiComp context of cyber-physical systems, where mobility and resource constraints are common, the physical world can be leveraged as a channel that encapsulates properties difficult to tamper with remotely, such as proximity and causality, in order to create a challenge-response function. Such a Proof-Carrying Sensing framework can help authenticate devices, collected data, and locations; compared to traditional multifactor or out-of-band authentication mechanisms, it has the unique advantage that authentication proofs are embedded in sensor data and can be continuously validated over time and space without running complicated cryptographic algorithms.

In terms of the intrinsic and extrinsic viewpoint above, the physical data available to establish mutual trust in the PCS framework can be intrinsic to the physical environment (such as temperature, luminosity, noise, or electrical frequency) or extrinsic to it, for example, actively injected by the device into the physical world. By monitoring the propagation of intrinsic or extrinsic data, a device can confirm its reception by other devices located within its vicinity. The challenge in designing and securely implementing such protocols can be addressed by the synergy of combined expertise in signal processing, statistical detection and learning, cryptography, software engineering, and electronics.

To help appreciate intrinsic and extrinsic evidence in addressing security and forensics in UbiComp, which involves both digital and physical elements, we now discuss two examples. Consider first an intrinsic signature of power grids. The electric network frequency (ENF) is the supply frequency of power distribution grids, with a nominal value of 60 Hz (North America) or 50 Hz (Europe). At any given time, the instantaneous value of the ENF usually fluctuates around its nominal value as a result of the dynamic interaction between load variations in the grid and the control mechanisms for power generation. These variations are nearly identical at all locations of the same grid at a given time due to the interconnected nature of the grid. The changing values of the instantaneous ENF over time form an ENF signal, which can be intrinsically captured by audio/visual recordings (Fig. 6) or other sensors [ 198 , 199 ]. This has led to recent forensic applications such as validating the time-of-recording of an ENF-containing multimedia signal and estimating its recording location using concurrent reference signals from power grids.

Fig. 6 An example of intrinsic evidence related to the power grid: spectrograms of ENF signals in concurrent recordings of a audio, b visual, and c power mains. A cross-correlation study can show the similarity between the media and the power-line reference at different time lags, where a strong peak appears at the temporal alignment of the matching grid
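A minimal sketch of the time-of-recording idea is given below: an ENF trace extracted from a recording is slid along a concurrent grid reference, and the lag with the highest correlation indicates the temporal alignment. The synthetic random-walk data and variable names are assumptions for illustration only.

```python
# Hedged sketch: aligning an ENF trace from a recording with a concurrent
# power-grid reference by scanning the correlation at every lag. The synthetic
# random walk below only mimics slow frequency fluctuations around 60 Hz.
import numpy as np

rng = np.random.default_rng(0)
grid = 60 + 0.005 * np.cumsum(rng.standard_normal(5000))    # reference ENF signal
media = grid[500:1500] + 0.001 * rng.standard_normal(1000)  # ENF extracted from media

def best_alignment(media_enf: np.ndarray, grid_enf: np.ndarray) -> int:
    """Return the lag (in samples) where the media ENF best matches the reference."""
    n = len(media_enf)
    scores = [np.corrcoef(grid_enf[k:k + n], media_enf)[0, 1]
              for k in range(len(grid_enf) - n + 1)]
    return int(np.argmax(scores))

print("estimated time-of-recording offset:", best_alignment(media, grid))  # ~500
```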

Next, consider the recent work by Satchidanandan and Kumar [ 200 ], which introduces a notion of watermarking in cyber-physical systems that can be viewed as a class of extrinsic signatures. If an actuator injects into the system a properly designed probing signal that is unknown in advance to the other nodes, then, based on knowledge of the cyber-physical system's dynamics and other properties, the actuator can examine the sensors' reports about the signals at various points and potentially infer whether there is malicious activity in the system and, if so, where and how.
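The toy simulation below captures the spirit of this idea under strong simplifying assumptions of ours (a scalar linear plant and Gaussian noise); it is not the construction of [ 200 ], only an illustration of the test it enables: the actuator superimposes a private excitation on its input and then checks whether the reported measurements are statistically consistent with that excitation.

```python
# Hedged toy of dynamic watermarking on an assumed scalar linear plant
# x[t+1] = a*x[t] + u[t] + e[t] + w[t], where e[t] is a private excitation.
import numpy as np

rng = np.random.default_rng(1)
T, a = 5000, 0.9
e = rng.normal(0.0, 1.0, T)      # private excitation known only to the actuator
u = np.zeros(T)                  # nominal control input (zero for simplicity)
w = rng.normal(0.0, 0.1, T)      # process noise

x = np.zeros(T + 1)
for t in range(T):               # true plant driven by the watermarked input
    x[t + 1] = a * x[t] + u[t] + e[t] + w[t]

honest = x.copy()                                              # sensor reports the truth
fabricated = np.concatenate(([0.0], rng.normal(0.0, 1.0, T)))  # sensor invents data

def watermark_correlation(report: np.ndarray) -> float:
    # The residual should be pure process noise if the report really carries e[t].
    residual = report[1:] - a * report[:-1] - u - e
    return float(np.corrcoef(residual, e)[0, 1])

print("honest sensor    :", round(watermark_correlation(honest), 3))      # ~0
print("fabricated sensor:", round(watermark_correlation(fabricated), 3))  # far from 0
```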

A major challenge and research opportunity lies in discovering and characterizing suitable intrinsic and extrinsic evidence. Although qualitative properties of some signatures are known, it is important to develop quantitative models that characterize normal and abnormal behavior in the context of the overall system. Along this line, exploring physical models might yield analytic approximations of such properties, while data-driven learning approaches can be used to gather statistics characterizing normal and abnormal behaviors. Building on these elements, a strong synergy across the boundaries of the traditionally separate domains of computer forensics, information forensics, and device forensics should be developed so as to achieve comprehensive system forensics capabilities in UbiComp.

9 Conclusion

In the words of Mark Weiser, Ubiquitous Computing is “the idea of integrating computers seamlessly into the world at large” [ 1 ]. Thus, far from being a phenomenon of our time, the design and practice of UbiComp systems were already being discussed a quarter of a century ago. In this article, we have revisited this notion, which permeates the most varied levels of our society, from a security and privacy point of view. In the coming years, these two topics will occupy much of the time of researchers and engineers. In our opinion, the use of this time should be guided by a few observations, which we list below:

UbiComp software is often produced as a combination of different programming languages, sharing a common core frequently implemented in a type-unsafe language such as C, C++, or assembly. Applications built in this domain tend to be distributed, and their analysis, e.g., via static analysis tools, needs to take a holistic view of the system.

The long life span of some of these systems, coupled with the difficulty (both operational and cost-wise) of updating and re-deploying them, makes them vulnerable to the inexorable progress of technology and cryptanalysis techniques. This brings new (and possibly disruptive) players into the discussion, such as quantum adversaries.

Key management is a critical component of any secure or private real-world system. After security roles and key management procedures are clearly defined for all entities in the framework, a set of matching cryptographic primitives must be deployed. Physical access and constrained resources complicate the design of efficient and secure cryptographic algorithms, which are often susceptible to side-channel attacks. Hence, current research challenges in this space include more efficient key management schemes, in particular supporting some form of revocation; the design of lightweight cryptographic primitives that facilitate correct and secure implementation; and cheaper side-channel countermeasures made available through advances in algorithms and embedded architectures.

Given the increasing popularization of UbiComp systems, people become more and more dependent on their services to perform commercial, financial, medical, and social transactions. This rising dependence requires simultaneously high levels of reliability, availability, and security, which strengthens the importance of designing and implementing resilient UbiComp systems.

One of the main challenges in providing pervasive IdM is to ensure the authenticity of devices and users, and to provide adaptive authorization, in scenarios with multiple, heterogeneous security domains.

Several databases currently store sensitive data. Moreover, a vast number of sensors are constantly collecting new sensitive data and storing them in clouds. Privacy-preserving protocols are being designed and perfected to enhance users' privacy in specific scenarios. Cultural interpretations of privacy, the variety of laws, big data from legacy systems in clouds, processing time, latency, and key distribution and management, among the other issues mentioned above, are challenges for the development of privacy-preserving protocols.

The convergence between physical and digital systems poses both challenges and opportunities in offering forensic capabilities that facilitate the authentication of data as well as of the access conditions, including who, when, where, and how; a synergistic use of intrinsic and extrinsic evidence with interdisciplinary expertise will be key.

Given these observations, and the importance of ubiquitous computing, it is easy to conclude that the future holds fascinating challenges waiting for the attention of academia and industry.

Finally, note that the observations and predictions presented in this work regarding how UbiComp may evolve represent our view of the field based on today's technology landscape. New scientific discoveries and technology inventions, as well as economic, social, and policy factors, may lead to new and/or different trends in the technology's evolutionary paths.

https://competitions.cr.yp.to/caesar.html

https://csrc.nist.gov/projects/lightweight-cryptography

Deloitte’s annual Technology, Media and Telecommunications Predictions 2017 report: https://www2.deloitte.com/content/dam/Deloitte/global/Documents/Technology-Media-Telecommunications/gx-deloitte-2017-tmt-predictions.pdf

https://www.fiware.org .

OAuth 2.0 core authorization framework is described by IETF in RFC 6749 and other specifications and profiles.

https://arx.deidentifier.org/

https://css.csail.mit.edu/cryptdb/

Abbreviations

Authentication and Authorization Infrastructure

Attribute Based Access Control

Access Control List

Advanced Encryption Standard

Capability Based Access Control

control flow graph

Certificateless cryptography

Distributed Denial of Service

Elliptic Curve Cryptography

Enhanced Client and Proxy

Electronic identity

Electric network frequency

Federated Identity Management Model

Hash-Based Signatures

Identity-based

Identity Management

Identity Provider

Identity Provider as a Service

Internet of things

Message Authentication Codes

Multi-factor authentication

Physical access control systems

Proof-Carrying Sensing

Policy Decision Point

Policy Decision Point as a Service

Policy Enforcement Point

Public key infrastructures

Post-quantum cryptography

Quantum cryptography

Role Based Access Control

Service Provider

Single Sign-On

Pervasive and ubiquitous computing

eXtensible Access Control Markup Language

Weiser M. The computer for the 21st century. Sci Am. 1991; 265(3):94–104.

Weiser M. Some computer science issues in ubiquitous computing. Commun ACM. 1993; 36(7):75–84.

Lyytinen K, Yoo Y. Ubiquitous computing. Commun ACM. 2002; 45(12):63–96.

Estrin D, Govindan R, Heidemann JS, Kumar S. Next century challenges: Scalable coordination in sensor networks. In: MobiCom’99. New York: ACM: 1999. p. 263–70.

Pottie GJ, Kaiser WJ. Wireless integrated network sensors. Commun ACM. 2000; 43(5):51–8.

Ashton K. That ’Internet of Things’ Thing. RFiD J. 2009; 22:97–114.

Atzori L, Iera A, Morabito G. The internet of things: a survey. Comput Netw. 2010; 54(15):2787–805.

Mann S. Wearable computing: A first step toward personal imaging. Computer. 1997; 30(2):25–32.

Martin T, Healey J. 2006’s wearable computing advances and fashions. IEEE Pervasive Comput. 2007; 6(1):14–6.

Lee EA. Cyber-physical systems-are computing foundations adequate. In: NSF Workshop On Cyber-Physical Systems: Research Motivation, Techniques and Roadmap, volume 2. Citeseer: 2006.

Rajkumar RR, Lee I, Sha L, Stankovic J. Cyber-physical systems: the next computing revolution. In: 47th Design Automation Conference. ACM: 2010.

Abowd GD, Mynatt ED. Charting past, present, and future research in ubiquitous computing. ACM Trans Comput Human Interact (TOCHI). 2000; 7(1):29–58.

Stajano F. Security for ubiquitous computing.Hoboken: Wiley; 2002.

Pierce BC. Types and programming languages, 1st edition. Cambridge: The MIT Press; 2002.

Cousot P, Cousot R. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: POPL. New York: ACM: 1977. p. 238–52.

McMillan KL. Symbolic model checking. Norwell: Kluwer Academic Publishers; 1993.

Leroy X. Formal verification of a realistic compiler. Commun ACM. 2009; 52(7):107–15.

Rice HG. Classes of recursively enumerable sets and their decision problems. Trans Amer Math Soc. 1953; 74(1):358–66.

Wilson RP, Lam MS. Efficient context-sensitive pointer analysis for c programs. In: PLDI. New York: ACM: 1995. p. 1–12.

Cadar C, Dunbar D, Engler D. KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In: OSDI. Berkeley: USENIX: 2008. p. 209–24.

Coppa E, Demetrescu C, Finocchi I. Input-sensitive profiling. In: PLDI. New York: ACM: 2012. p. 89–98.

Graham SL, Kessler PB, McKusick MK. gprof: a call graph execution profiler (with retrospective). In: Best of PLDI. New York: ACM: 1982. p. 49–57.

Godefroid P, Klarlund N, Sen K. Dart: directed automated random testing. In: PLDI. New York: ACM: 2005. p. 213–23.

Nethercote N, Seward J. Valgrind: a framework for heavyweight dynamic binary instrumentation. In: PLDI. New York: ACM: 2007. p. 89–100.

Luk C-K, Cohn R, Muth R, Patil H, Klauser A, Lowney G, Wallace S, Reddi VJ, Hazelwood K. Pin: Building customized program analysis tools with dynamic instrumentation. In: PLDI. New York: ACM: 2005. p. 190–200.

Rimsa AA, D’Amorim M, Pereira FMQ. Tainted flow analysis on e-SSA-form programs. In: CC. Berlin: Springer: 2011. p. 124–43.

Serebryany K, Bruening D, Potapenko A, Vyukov D. Addresssanitizer: a fast address sanity checker. In: ATC. Berkeley: USENIX: 2012. p. 28.

Russo A, Sabelfeld A. Dynamic vs. static flow-sensitive security analysis. In: CSF. Washington: IEEE: 2010. p. 186–99.

Carlini N, Barresi A, Payer M, Wagner D, Gross TR. Control-flow bending: On the effectiveness of control-flow integrity. In: SEC. Berkeley: USENIX: 2015. p. 161–76.

Klein G, Elphinstone K, Heiser G, Andronick J, Cock D, Derrin P, Elkaduwe D, Engelhardt K, Kolanski R, Norrish M, Sewell T, Tuch H, Winwood S. sel4: Formal verification of an os kernel. In: SOSP. New York: ACM: 2009. p. 207–20.

Jourdan J-H, Laporte V, Blazy S, Leroy X, Pichardie D. A formally-verified c static analyzer. In: POPL. New York: ACM: 2015. p. 247–59.

Soares LFG, Rodrigues RF, Moreno MF. Ginga-NCL: the declarative environment of the brazilian digital tv system. J Braz Comp Soc. 2007; 12(4):1–10.

Maas AJ, Nazaré H, Liblit B. Array length inference for c library bindings. In: ASE. New York: ACM: 2016. p. 461–71.

Fedrecheski G, Costa LCP, Zuffo MK. ISCE. Washington: IEEE: 2016.

Rellermeyer JS, Duller M, Gilmer K, Maragkos D, Papageorgiou D, Alonso G. The software fabric for the internet of things. In: IOT. Berlin, Heidelberg: Springer-Verlag: 2008. p. 87–104.

Furr M, Foster JS. Checking type safety of foreign function calls. ACM Trans Program Lang Syst. 2008; 30(4):18:1–18:63.

Dagenais B, Hendren L. OOPSLA. New York: ACM: 2008. p. 313–28.

Melo LTC, Ribeiro RG, de Araújo MR, Pereira FMQ. Inference of static semantics for incomplete c programs. Proc ACM Program Lang. 2017; 2(POPL):29:1–29:28.

Godefroid P. Micro execution. In: ICSE. New York: ACM: 2014. p. 539–49.

Manna Z, Waldinger RJ. Toward automatic program synthesis. Commun ACM. 1971; 14(3):151–65.

López HA, Marques ERB, Martins F, Ng N, Santos C, Vasconcelos VT, Yoshida N. Protocol-based verification of message-passing parallel programs. In: OOPSLA. New York: ACM: 2015. p. 280–98.

Bronevetsky G. Communication-sensitive static dataflow for parallel message passing applications. In: CGO. Washington: IEEE: 2009. p. 1–12.

Teixeira FA, Machado GV, Pereira FMQ, Wong HC, Nogueira JMS, Oliveira LB. Siot: Securing the internet of things through distributed system analysis. In: IPSN. New York: ACM: 2015. p. 310–21.

Lhoták O, Hendren L. Context-sensitive points-to analysis: Is it worth it? In: CC. Berlin, Heidelberg: Springer: 2006. p. 47–64.

Agha G. An overview of actor languages. In: OOPWORK. New York: ACM: 1986. p. 58–67.

Haller P, Odersky M. Actors that unify threads and events. In: Proceedings of the 9th International Conference on Coordination Models and Languages. COORDINATION’07. Berlin, Heidelberg: Springer-Verlag: 2007. p. 171–90.

Imam SM, Sarkar V. Integrating task parallelism with actors. In: OOPSLA. New York: ACM: 2012. p. 753–72.

Cousot P, Cousot R, Logozzo F. A parametric segmentation functor for fully automatic and scalable array content analysis. In: POPL. New York: ACM: 2011. p. 105–18.

Nazaré H, Maffra I, Santos W, Barbosa L, Gonnord L, Pereira FMQ. Validation of memory accesses through symbolic analyses. In: OOPSLA. New York: ACM: 2014.

Paisante V, Maalej M, Barbosa L, Gonnord L, Pereira FMQ. Symbolic range analysis of pointers. In: CGO. New York: ACM: 2016. p. 171–81.

Maalej M, Paisante V, Ramos P, Gonnord L, Pereira FMQ. Pointer disambiguation via strict inequalities. In: Proceedings of the 2017 International Symposium on Code Generation and Optimization, CGO ’17 . Piscataway: IEEE Press: 2017. p. 134–47.

Maalej M, Paisante V, Pereira FMQ, Gonnord L. Combining range and inequality information for pointer disambiguation. Sci Comput Program. 2018; 152(C):161–84.

Sui Y, Fan X, Zhou H, Xue J. Loop-oriented pointer analysis for automatic simd vectorization. ACM Trans Embed Comput Syst. 2018; 17(2):56:1–56:31.

Poovendran R. Cyber-physical systems: Close encounters between two parallel worlds [point of view]. Proc IEEE. 2010; 98(8):1363–6.

Conti JP. The internet of things. Commun Eng. 2006; 4(6):20–5.

Rinaldi SM, Peerenboom JP, Kelly TK. Identifying, understanding, and analyzing critical infrastructure interdependencies. IEEE Control Syst. 2001; 21(6):11–25.

US Bureau of Transportation Statistics BTS. Average age of automobiles and trucks in operation in the united states. 2017. Accessed 14 Sept 2017.

U.S. Department of Transportation. IEEE 1609 - Family of Standards for Wireless Access in Vehicular Environments WAVE. 2013.

Maurer M, Gerdes JC, Lenz B, Winner H. Autonomous driving: technical, legal and social aspects.Berlin: Springer; 2016.

Patel N. 90% of startups fail: Here is what you need to know about the 10%. 2015. https://www.forbes.com/sites/neilpatel/2015/01/16/90-of-startups-will-fail-heres-what-you-need-to-know-about-the-10/ . Accessed 09 Sept 2018.

Jacobsson A, Boldt M, Carlsson B. A risk analysis of a smart home automation system. Futur Gener Comput Syst. 2016; 56(Supplement C):719–33.

Rivest RL, Shamir A, Adleman LM. A method for obtaining digital signatures and public-key cryptosystems. Commun ACM. 1978; 21(2):120–6.

Miller VS. Use of elliptic curves in cryptography. In: CRYPTO, volume 218 of Lecture Notes in Computer Science. Berlin: Springer: 1985. p. 417–26.

Koblitz N. Elliptic curve cryptosystems. Math Comput. 1987; 48(177):203–9.

Barbulescu R, Gaudry P, Joux A, Thomé E. A heuristic quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. In: EUROCRYPT 2014. Berlin: Springer: 2014. p. 1–16.

Diffie W, Hellman M. New directions in cryptography. IEEE Trans Inf Theor. 2006; 22(6):644–54.

Barker E. Federal Information Processing Standards Publication (FIPS PUB) 186-4 Digital Signature Standard (DSS). 2013.

Barker E, Johnson D, Smid M. Special publication 800-56A recommendation for pair-wise key establishment schemes using discrete logarithm cryptography. 2006.

Simon DR. On the power of quantum computation. In: Symposium on Foundations of Computer Science (SFCS 94). Washington: IEEE Computer Society: 1994. p. 116–23.

Knill E. Physics: quantum computing. Nature. 2010; 463(7280):441–3.

Grover LK. A fast quantum mechanical algorithm for database search. In: Proceedings of ACM STOC 1996. New York: ACM: 1996. p. 212–19.

Shor PW. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J Comput. 1997; 26(5):1484–509.

McEliece RJ. A public-key cryptosystem based on algebraic coding theory. Deep Space Netw. 1978; 44:114–6.

Merkle RC. Secrecy, authentication and public key systems / A certified digital signature. PhD thesis, Stanford. 1979.

Regev O. On lattices, learning with errors, random linear codes, and cryptography. In: Proceedings of ACM STOC ’05. STOC ’05. New York: ACM: 2005. p. 84–93.

Buchmann J, Dahmen E, Hülsing A. Xmss - a practical forward secure signature scheme based on minimal security assumptions In: Yang B-Y, editor. PQCrypto. Berlin: Springer: 2011. p. 117–29.

McGrew DA, Curcio M, Fluhrer S. Hash-Based Signatures. Internet Engineering Task Force (IETF). 2017. https://datatracker.ietf.org/doc/html/draft-mcgrew-hash-sigs-13 . Accessed 9 Sept 2018.

Bennett CH, Brassard G. Quantum cryptography: public key distribution and coin tossing. In: Proceedings of IEEE ICCSSP’84. New York: IEEE Press: 1984. p. 175–9.

Bos J, Costello C, Ducas L, Mironov I, Naehrig M, Nikolaenko V, Raghunathan A, Stebila D. Frodo: Take off the ring! practical, quantum-secure key exchange from LWE. Cryptology ePrint Archive, Report 2016/659. 2016. http://eprint.iacr.org/2016/659 .

Alkim E, Ducas L, Pöppelmann T, Schwabe P. Post-quantum key exchange - a new hope. Cryptology ePrint Archive, Report 2015/1092. 2015. http://eprint.iacr.org/2015/1092 .

Misoczki R, Tillich J-P, Sendrier N, Barreto PSLM. MDPC-McEliece: New McEliece variants from moderate density parity-check codes. In: IEEE International Symposium on Information Theory – ISIT’2013. Istanbul: IEEE: 2013. p. 2069–73.

Hoffstein J, Pipher J, Silverman JH. Ntru: A ring-based public key cryptosystem. In: International Algorithmic Number Theory Symposium. Berlin: Springer: 1998. p. 267–88.

Bos J, Ducas L, Kiltz E, Lepoint T, Lyubashevsky V, Schanck JM, Schwabe P, Stehlé D. Crystals–kyber: a CCA-secure module-lattice-based KEM. IACR Cryptol ePrint Arch. 2017; 2017:634.

Aragon N, Barreto PSLM, Bettaieb S, Bidoux L, Blazy O, Deneuville J-C, Gaborit P, Gueron S, Guneysu T, Melchor CA, Misoczki R, Persichetti E, Sendrier N, Tillich J-P, Zemor G. BIKE: Bit flipping key encapsulation. Submission to the NIST Standardization Process on Post-Quantum Cryptography. 2017. https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Round-1-Submissions .

Barreto PSLM, Gueron S, Gueneysu T, Misoczki R, Persichetti E, Sendrier N, Tillich J-P. Cake: Code-based algorithm for key encapsulation. In: IMA International Conference on Cryptography and Coding. Berlin: Springer: 2017. p. 207–26.

Jao D, De Feo L. Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies. In: International Workshop on Post-Quantum Cryptography. Berlin: Springer: 2011. p. 19–34.

Costello C, Jao D, Longa P, Naehrig M, Renes J, Urbanik D. Efficient compression of sidh public keys. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques. Berlin: Springer: 2017. p. 679–706.

Jao D, Azarderakhsh R, Campagna M, Costello C, DeFeo L, Hess B, Jalali A, Koziel B, LaMacchia B, Longa P, Naehrig M, Renes J, Soukharev V, Urbanik D. SIKE: Supersingular isogeny key encapsulation. Submission to the NIST Standardization Process on Post-Quantum Cryptography. 2017. https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Round-1-Submissions .

Galbraith SD, Petit C, Shani B, Ti YB. On the security of supersingular isogeny cryptosystems. In: International Conference on the Theory and Application of Cryptology and Information Security. Berlin: Springer: 2016. p. 63–91.

National Institute of Standards and Technology (NIST). Standardization Process on Post-Quantum Cryptography. 2016. http://csrc.nist.gov/groups/ST/post-quantum-crypto/ . Accessed 9 Sept 2018.

McGrew D, Kampanakis P, Fluhrer S, Gazdag S-L, Butin D, Buchmann J. State management for hash-based signatures. In: International Conference on Research in Security Standardization. Springer: 2016. p. 244–60.

Bernstein DJ, Hopwood D, Hülsing A, Lange T, Niederhagen R, Papachristodoulou L, Schneider M, Schwabe P, Wilcox-O’Hearn Z. SPHINCS: Practical Stateless Hash-Based Signatures. Berlin, Heidelberg: Springer Berlin Heidelberg; 2015. p. 368–97.

Barker E, Barker W, Burr W, Polk W, Smid M. Recommendation for key management part 1: General (revision 3). NIST Spec Publ. 2012; 800(57):1–147.

Waters B. Ciphertext-policy attribute-based encryption: An expressive, efficient, and provably secure realization. In: Public Key Cryptography. LNCS, 6571 vol.Berlin: Springer: 2011. p. 53–70.

Liu Z, Wong DS. Practical attribute-based encryption: Traitor tracing, revocation and large universe. Comput J. 2016; 59(7):983–1004.

Oliveira LB, Aranha DF, Gouvêa CPL, Scott M, Câmara DF, López J, Dahab R. Tinypbc: Pairings for authenticated identity-based non-interactive key distribution in sensor networks. Comput Commun. 2011; 34(3):485–93.

Kim T, Barbulescu R. Extended tower number field sieve: A new complexity for the medium prime case. In: CRYPTO (1). LNCS, 9814 vol.Berlin: Springer: 2016. p. 543–71.

Boneh D, Franklin MK. Identity-based encryption from the weil pairing. SIAM J Comput. 2003; 32(3):586–615.

Al-Riyami SS, Paterson KG. Certificateless public key cryptography. In: ASIACRYPT. LNCS, 2894 vol.Berlin: Springer: 2003. p. 452–73.

Boldyreva A, Goyal V, Kumar V. Identity-based encryption with efficient revocation. IACR Cryptol ePrint Arch. 2012; 2012:52.

Simplício Jr. MA, Silva MVM, Alves RCA, Shibata TKC. Lightweight and escrow-less authenticated key agreement for the internet of things. Comput Commun. 2017; 98:43–51.

Neto ALM, Souza ALF, Cunha ÍS, Nogueira M, Nunes IO, Cotta L, Gentille N, Loureiro AAF, Aranha DF, Patil HK, Oliveira LB. Aot: Authentication and access control for the entire iot device life-cycle. In: SenSys. New York: ACM: 2016. p. 1–15.

Mouha N. The design space of lightweight cryptography. IACR Cryptol ePrint Arch. 2015; 2015:303.

Daemen J, Rijmen V. The Design of Rijndael: AES - The Advanced Encryption Standard. Information Security and Cryptography. Berlin: Springer; 2002.

Grosso V, Leurent G, Standaert F-X, Varici K. Ls-designs: Bitslice encryption for efficient masked software implementations. In: FSE. LNCS, 8540 vol.Berlin: Springer: 2014. p. 18–37.

Dinu D, Perrin L, Udovenko A, Velichkov V, Großschädl J, Biryukov A. Design strategies for ARX with provable bounds: Sparx and LAX. In: ASIACRYPT (1). LNCS, 10031 vol.Berlin: Springer: 2016. p. 484–513.

Albrecht MR, Driessen B, Kavun EB, Leander G, Paar C, Yalçin T. Block ciphers - focus on the linear layer (feat. PRIDE). In: CRYPTO (1). LNCS, 8616 vol.Berlin: Springer: 2014. p. 57–76.

Beierle C, Jean J, Kölbl S, Leander G, Moradi A, Peyrin T, Sasaki Y, Sasdrich P, Sim SM. The SKINNY family of block ciphers and its low-latency variant MANTIS. In: CRYPTO (2). LNCS, 9815 vol.Berlin: Springer: 2016. p. 123–53.

Bogdanov A, Knudsen LR, Leander G, Paar C, Poschmann A, Robshaw MJB, Seurin Y, Vikkelsoe C. PRESENT: an ultra-lightweight block cipher. In: CHES. LNCS, 4727 vol.Berlin: Springer: 2007. p. 450–66.

Reis TBS, Aranha DF, López J. PRESENT runs fast - efficient and secure implementation in software. In: CHES, volume 10529 of Lecture Notes in Computer Science. Berlin: Springer: 2017. p. 644–64.

Aumasson J-P, Bernstein DJ. Siphash: A fast short-input PRF. In: INDOCRYPT. LNCS, 7668 vol.Berlin: Springer: 2012. p. 489–508.

Kölbl S, Lauridsen MM, Mendel F, Rechberger C. Haraka v2 - efficient short-input hashing for post-quantum applications. IACR Trans Symmetric Cryptol. 2016; 2016(2):1–29.

Aumasson J-P, Neves S, Wilcox-O’Hearn Z, Winnerlein C. BLAKE2: simpler, smaller, fast as MD5. In: ACNS. LNCS, 7954 vol.Berlin: Springer: 2013. p. 119–35.

Stevens M, Karpman P, Peyrin T. Freestart collision for full SHA-1. In: EUROCRYPT (1). LNCS, 9665 vol.Berlin: Springer: 2016. p. 459–83.

NIST Computer Security Division. SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions. FIPS Publication 202, National Institute of Standards and Technology, U.S. Department of Commerce, May 2014.

McGrew DA, Viega J. The security and performance of the galois/counter mode (GCM) of operation. In: INDOCRYPT. LNCS, 3348 vol.Berlin: Springer: 2004. p. 343–55.

Koblitz N. A family of jacobians suitable for discrete log cryptosystems. In: CRYPTO, volume 403 of LNCS. Berlin: Springer: 1988. p. 94–99.

Bernstein DJ. Curve25519: New diffie-hellman speed records. In: Public Key Cryptography. LNCS, 3958 vol.Berlin: Springer: 2006. p. 207–28.

Bernstein DJ, Duif N, Lange T, Schwabe P, Yang B-Y. High-speed high-security signatures. J Cryptographic Eng. 2012; 2(2):77–89.

Costello C, Longa P. Four \(\mathbb {Q}\) : Four-dimensional decompositions on a \(\mathbb {Q}\) -curve over the mersenne prime. In: ASIACRYPT (1). LNCS, 9452 vol.Berlin: Springer: 2015. p. 214–35.

Banik S, Bogdanov A, Regazzoni F. Exploring energy efficiency of lightweight block ciphers. In: SAC. LNCS, 9566 vol.Berlin: Springer: 2015. p. 178–94.

Dinu D, Corre YL, Khovratovich D, Perrin L, Großschädl J, Biryukov A. Triathlon of lightweight block ciphers for the internet of things. NIST Workshop on Lightweight Cryptography. 2015.

Kocher PC. Timing attacks on implementations of diffie-hellman, rsa, dss, and other systems. In: CRYPTO. LNCS, 1109 vol.Berlin: Springer: 1996. p. 104–13.

Rodrigues B, Pereira FMQ, Aranha DF. Sparse representation of implicit flows with applications to side-channel detection In: Zaks A, Hermenegildo MV, editors. Proceedings of the 25th International Conference on Compiler Construction, CC 2016, Barcelona, Spain, March 12-18, 2016. New York: ACM: 2016. p. 110–20.

Almeida JB, Barbosa M, Barthe G, Dupressoir F, Emmi M. Verifying constant-time implementations. In: USENIX Security Symposium. Berkeley: USENIX Association: 2016. p. 53–70.

Kocher PC, Jaffe J, Jun B. Differential power analysis. In: CRYPTO. LNCS, 1666 vol. Springer: 1999. p. 388–97.

Biham E, Shamir A. Differential fault analysis of secret key cryptosystems. In: CRYPTO. LNCS, 1294 vol.Berlin: Springer: 1997. p. 513–25.

Kim Y, Daly R, Kim J, Fallin C, Lee J-H, Lee D, Wilkerson C, Lai K, Mutlu O. Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors. In: ISCA. Washington, DC: IEEE Computer Society: 2014. p. 361–72.

Ishai Y, Sahai A, Wagner D. Private circuits: Securing hardware against probing attacks. In: CRYPTO. LNCS, 2729 vol. Springer: 2003. p. 463–81.

Balasch J, Gierlichs B, Grosso V, Reparaz O, Standaert F-X. On the cost of lazy engineering for masked software implementations. In: CARDIS. LNCS, 8968 vol.Berlin: Springer: 2014. p. 64–81.

Nogueira M, dos Santos AL, Pujolle G. A survey of survivability in mobile ad hoc networks. IEEE Commun Surv Tutor. 2009; 11(1):66–77.

Mansfield-Devine S. The growth and evolution of ddos. Netw Secur. 2015; 2015(10):13–20.

Thielman S, Johnston C. Major Cyber Attack Disrupts Internet Service Across Europe and US. https://www.theguardian.com/technology/2016/oct/21/ddos-attack-dyn-internet-denial-service . Accessed 3 July 2018.

DDoS attacks: For the hell of it or targeted – how do you see them off? http://www.theregister.co.uk/2016/09/22/ddos_attack_defence/ . Accessed 14 Feb 2017.

Santos AA, Nogueira M, Moura JMF. A stochastic adaptive model to explore mobile botnet dynamics. IEEE Commun Lett. 2017; 21(4):753–6.

Macedo R, de Castro R, Santos A, Ghamri-Doudane Y, Nogueira M. Self-organized SDN controller cluster conformations against DDoS attacks effects. In: 2016 IEEE Global Communications Conference, GLOBECOM, 2016, Washington, DC, USA, December 4–8, 2016. Piscataway: IEEE: 2016. p. 1–6.

Soto J, Nogueira M. A framework for resilient and secure spectrum sensing on cognitive radio networks. Comput Netw. 2015; 79:313–22.

Lipa N, Mannes E, Santos A, Nogueira M. Firefly-inspired and robust time synchronization for cognitive radio ad hoc networks. Comput Commun. 2015; 66:36–44.

Zhang C, Song Y, Fang Y. Modeling secure connectivity of self-organized wireless ad hoc networks. In: IEEE INFOCOM. Piscataway: IEEE: 2008. p. 251–5.

Salem NB, Hubaux J-P. Securing wireless mesh networks. IEEE Wirel Commun. 2006; 13(2):50–5.

Yang H, Luo H, Ye F, Lu S, Zhang L. Security in mobile ad hoc networks: challenges and solutions. IEEE Wirel Commun. 2004; 11(1):38–47.

Nogueira M. SAMNAR: A survivable architecture for wireless self-organizing networks. PhD thesis, Université Pierre et Marie Curie - LIP6. 2009.

ITU. NGN identity management framework: International Telecommunication Union (ITU); 2009. Recommendation Y.2720.

Lopez J, Oppliger R, Pernul G. Authentication and authorization infrastructures (aais): a comparative survey. Comput Secur. 2004; 23(7):578–90.

Arias-Cabarcos P, Almenárez F, Trapero R, Díaz-Sánchez D, Marín A. Blended identity: Pervasive idm for continuous authentication. IEEE Secur Priv. 2015; 13(3):32–39.

Bhargav-Spantzel A, Camenisch J, Gross T, Sommer D. User centricity: a taxonomy and open issues. J Comput Secur. 2007; 15(5):493–527.

Garcia-Morchon O, Kumar S, Sethi M, Internet Engineering Task Force. State-of-the-art and challenges for the internet of things security. Internet Engineering Task Force; 2017. https://datatracker.ietf.org/doc/html/draft-irtf-t2trg-iot-seccons-04 .

Torres J, Nogueira M, Pujolle G. A survey on identity management for the future network. IEEE Commun Surv Tutor. 2013; 15(2):787–802.

Hanumanthappa P, Singh S. Privacy preserving and ownership authentication in ubiquitous computing devices using secure three way authentication. In: Proceedings. International Conference on Innovations in Information Technology (IIT): 2012. p. 107–12.

Fremantle P, Aziz B, Kopecký J, Scott P. Federated identity and access management for the internet of things. In: 2014 International Workshop on Secure Internet of Things: 2014. p. 10–17.

Domenech MC, Boukerche A, Wangham MS. An authentication and authorization infrastructure for the web of things. In: Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks, Q2SWinet ’16. New York: ACM: 2016. p. 39–46.

Birrell E, Schneider FB. Federated identity management systems: A privacy-based characterization. IEEE Secur Priv. 2013; 11(5):36–48.

Nguyen T-D, Al-Saffar A, Huh E-N. A dynamic id-based authentication scheme. In: Proceedings. Sixth International Conference on Networked Computing and Advanced Information Management (NCM), 2010.2010. p. 248–53.

Gusmeroli S, Piccione S, Rotondi D. A capability-based security approach to manage access control in the internet of things. Math Comput Model. 2013; 58:1189–205.

Akram H, Hoffmann M. Supports for identity management in ambient environments-the hydra approach. In: Proceedings. 3rd International Conference on Systems and Networks Communications, 2008. ICSNC’08.2008. p. 371–7.

Liu J, Xiao Y, Chen CLP. Authentication and access control in the internet of things. In: Proceedings. 32nd International Conference on Distributed Computing Systems Workshops (ICDCSW) 2012.2012. p. 588–92.

Ndibanje B, Lee H-J, Lee S-G. Security analysis and improvements of authentication and access control in the internet of things. Sensors. 2014; 14(8):14786–805.

Kim Y-P, Yoo S, Yoo C. Daot: Dynamic and energy-aware authentication for smart home appliances in internet of things. In: Consumer Electronics (ICCE), 2015 IEEE International Conference on.2015. p. 196–7.

Markmann T, Schmidt TC, Wählisch M. Federated end-to-end authentication for the constrained internet of things using ibc and ecc. SIGCOMM Comput Commun Rev. 2015; 45(4):603–4.

Dasgupta D, Roy A, Nag A. Multi-factor authentication. Cham: Springer International Publishing; 2017. p. 185–233.

NIST. Digital Identity Guidelines. NIST Special Publication 800-63-3. 2017. https://doi.org/10.6028/NIST.SP.800-63-3 .

Dzurenda P, Hajny J, Zeman V, Vrba K. Modern physical access control systems and privacy protection. In: 2015 38th International Conference on Telecommunications and Signal Processing (TSP).2015. p. 1–5.

Guinard D, Fischer M, Trifa V. Sharing using social networks in a composable web of things. In: Proceedings. 8th IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), 2010.2010. p. 702–7.

Rotondi D, Seccia C, Piccione S. Access control & IoT: Capability based authorization access control system. In: Proceedings. 1st IoT International Forum: 2011.

Mahalle PN, Anggorojati B, Prasad NR, Prasad R. Identity authentication and capability based access control (iacac) for the internet of things. J Cyber Secur Mob. 2013; 1(4):309–48.

Moreira Sá De Souza L, Spiess P, Guinard D, Köhler M, Karnouskos S, Savio D. Socrades: A web service based shop floor integration infrastructure. In: The internet of things. Springer: 2008. p. 50–67.

Jindou J, Xiaofeng Q, Cheng C. Access control method for web of things based on role and sns. In: Proceedings. IEEE 12th International Conference on Computer and Information Technology (CIT), 2012. Washington: IEEE Computer Society: 2012. p. 316–21.

Han Q, Li J. An authorization management approach in the internet of things. J Inf Comput Sci. 2012; 9(6):1705–13.

Zhang G, Liu J. A model of workflow-oriented attributed based access control. Int J Comput Netw Inf Secur (IJCNIS). 2011; 3(1):47–53.

do Prado Filho TG, Vinicius Serafim Prazeres C. Multiauth-wot: A multimodal service for web of things athentication and identification. In: Proceedings of the 21st Brazilian Symposium on Multimedia and the Web, WebMedia ’15. New York: ACM: 2015. p. 17–24.

Alam S, Chowdhury MMR, Noll J. Interoperability of security-enabled internet of things. Wirel Pers Commun. 2011; 61(3):567–86.

Seitz L, Selander G, Gehrmann C. Authorization framework for the internet-of-things. In: Proceedings. IEEE 14th International Symposium and Workshops on a World of Wireless, Mobile and Multimedia Networks (WoWMoM). Washington, DC: IEEE Computer Society: 2013. p. 1–6.

OASIS. Saml v2.0 executive overview. 2005. https://www.oasis-open.org/committees/download.php/13525/sstc-saml-exec-overview-2.0-cd-01-2col.pdf .

Hardt D. The oauth 2.0 authorization framework. RFC 6749, RFC Editor; 2012. http://www.rfc-editor.org/rfc/rfc6749.txt .

Maler E, Reed D. The venn of identity: Options and issues in federated identity management. IEEE Secur Priv. 2008; 6(2):16–23.

Naik N, Jenkins P. Securing digital identities in the cloud by selecting an apposite federated identity management from saml, oauth and openid connect. In: 2017 11th International Conference on Research Challenges in Information Science (RCIS). Piscataway: IEEE: 2017. p. 163–74.

OASIS. Authentication context for the oasis security assertion markup language (saml) v2.0. 2005. http://docs.oasis-open.org/security/saml/v2.0/saml-authn-context-2.0-os.pdf .

Paci F, Ferrini R, Musci A, Jr KS, Bertino E. An interoperable approach to multifactor identity verification. Computer. 2009; 42(5):50–7.

Pöhn D, Metzger S, Hommel W. Géant-trustbroker: Dynamic, scalable management of saml-based inter-federation authentication and authorization infrastructures In: Cuppens-Boulahia N, Cuppens F, Jajodia S, El Kalam AA, Sans T, editors. ICT Systems Security and Privacy Protection. Berlin, Heidelberg: Springer Berlin Heidelberg: 2014. p. 307–20.

Zeng D, Guo S, Cheng Z. The web of things: A survey. J Commun. 2011;6(6). http://ojs.academypublisher.com/index.php/jcm/article/view/jcm0606424438 .

The OpenID Foundation. Openid connect core 1.0. 2014. http://openid.net/specs/openid-connect-core-1\_0.html .

Domenech MC, Comunello E, Wangham MS. Identity management in e-health: A case study of web of things application using openid connect. In: 2014 IEEE 16th International Conference on e-Health Networking, Applications and Services (Healthcom). Piscataway: IEEE: 2014. p. 219–24.

OASIS. Extensible access control markup language (xacml) version 3.0. 2013. http://docs.oasis-open.org/xacml/3.0/xacml-3.0-core-spec-os-en.pdf .

Borges F, Demirel D, Bock L, Buchmann JA, Mühlhäuser M. A privacy-enhancing protocol that provides in-network data aggregation and verifiable smart meter billing. In: ISCC. USA: IEEE: 2014. p. 1–6.

Borges de Oliveira F. Background and Models. Cham: Springer International Publishing; 2017. p. 13–23.

Borges de Oliveira F. Reasons to Measure Frequently and Their Requirements. Cham: Springer International Publishing; 2017. p. 39–47.

Holvast J. The Future of Identity in the Information Society, volume 298 of IFIP Advances in Information and Communication Technology In: Matyáš V, Fischer-Hübner S, Cvrček D, Švenda P, editors. Berlin: Springer Berlin Heidelberg: 2009. p. 13–42.

Toshniwal D. Privacy preserving data mining techniques for hiding sensitive data: a step towards open data. Singapore: Springer Singapore: 2018. p. 205–12.

Li N, Li T, Venkatasubramanian S. t-closeness: Privacy beyond k-anonymity and l-diversity. In: 2007 IEEE 23rd International Conference on Data Engineering. USA: IEEE: 2007. p. 106–15.

De Montjoye Y-A, Hidalgo CA, Verleysen M, Blondel VD. Unique in the crowd: The privacy bounds of human mobility. Sci Rep. 2013; 3:1–5.

Borges de Oliveira F. Quantifying the aggregation size. Cham: Springer International Publishing; 2017. p. 49–60.

Gentry C. A Fully Homomorphic Encryption Scheme. Stanford: Stanford University; 2009. AAI3382729.

Borges de Oliveira F. A Selective Review. Cham: Springer International Publishing; 2017. p. 25–36.

Borges de Oliveira F. Selected Privacy-Preserving Protocols. Cham: Springer International Publishing; 2017. p. 61–100.

Borges F, Lara P, Portugal R. Parallel algorithms for modular multi-exponentiation. Appl Math Comput. 2017; 292:406–16.

Stamm MC, Wu M, Liu KJR. Information forensics: An overview of the first decade. IEEE Access. 2013; 1:167–200.

Wu M, Quintão Pereira FM, Liu J, Ramos HS, Alvim MS, Oliveira LB. New directions: Proof-carrying sensing — Towards real-world authentication in cyber-physical systems. In: Proceedings of ACM Conf. on Embedded Networked Sensor Systems (SenSys). New York: ACM: 2017.

Grigoras C. Applications of ENF analysis in forensic authentication of digital audio and video recordings. J Audio Eng Soc. 2009; 57(9):643–61.

Garg R, Varna AL, Hajj-Ahmad A, Wu M. “seeing” enf: Power-signature-based timestamp for digital multimedia via optical sensing and signal processing. TIFS. 2013; 8(9):1417–32.

Satchidanandan B, Kumar PR. Dynamic watermarking: Active defense of networked cyber–physical systems. Proc IEEE. 2017; 105(2):219–40.

Acknowledgments

We would like to thank Artur Souza for contributing fruitful discussions to this work.

This work was partially supported by the CNPq, NSF, RNP, FAPEMIG, FAPERJ, and CAPES.

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Author information

Authors and affiliations.

UFMG, Av. Antônio Carlos, 6627, Prédio do ICEx, Anexo U, sala 6330 Pampulha, Belo Horizonte, MG, Brasil

Leonardo B. Oliveira

Federal University of Minas Gerais, Belo Horizonte, Brasil

Fernando Magno Quintão Pereira

Intel Labs, Hillsboro, USA

Rafael Misoczki

University of Campinas, Campinas, Brasil

Diego F. Aranha

National Laboratory for Scientific Computing, Petrópolis, Brasil

Fábio Borges

Federal University of Paraná, Curitiba, Brasil

Michele Nogueira

Universidade do Vale do Itajaí, Florianópolis, Brasil

Michelle Wangham

University of Maryland, Maryland, USA

Microsoft Research, Redmond, WA, USA

Contributions

All authors wrote and reviewed the manuscript. Mainly, LBO focused on the introduction and the whole paper conception, FM focused on Software Protection, RM focused on Long-Term Security, DFA focused on Cryptographic Engineering, MN focused on Resilience, MW focused on Identity Management, FB focused on Privacy, MW focused on Forensics, JL focused on the conclusion and the whole paper conception. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Leonardo B. Oliveira .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

Authors’ information.

Leonardo B. Oliveira is an associate professor of the CS Department at UFMG, a visiting associate professor of the CS Department at Stanford, and a research productivity fellow of the Brazilian Research Council (CNPq). Leonardo has been awarded the Microsoft Research Ph.D. Fellowship Award, the IEEE Young Professional Award, and the Intel Strategic Research Alliance Award. He published papers on the security of IoT/Cyber-Physical Systems in publication venues like IPSN and SenSys, and he is the (co)inventor of an authentication scheme for IoT (USPTO Patent Application No. 62287832). Leonardo served as General Chair and TPC Chair of the Brazilian Symposium on Security (SBSeg) in 2014 and 2016, respectively, and as a member in the Advisory Board of the Special Interest Group on Information and Computer System Security (CESeg) of the Brazilian Computer Society. He is a member of the Technical Committee of Identity Management (CT-GId) of the Brazilian National Research and Education Network (RNP).

Fernando M. Q. Pereira is an associate professor at UFMG’s Computer Science Department. He got his Ph.D. at the University of California, Los Angeles, in 2008, and has since done research in the field of compilers. He seeks to develop techniques that let programmers produce safe, yet efficient code. Fernando’s portfolio of analyses and optimizations is available at http://cuda.dcc.ufmg.br/ . Some of these techniques found their way into important open source projects, such as LLVM, PHC and Firefox.

Rafael Misoczki is a Research Scientist at Intel Labs, USA. His work is focused on post-quantum cryptography and conventional cryptography. He contributes to international standardization efforts on cryptography (expert member of the USA delegation for ISO/IEC JTC1 SC27 WG2, expert member of INCITS CS1, and submitter to the NIST standardization competition on post-quantum cryptography). He holds a PhD degree from Sorbonne Universités (University of Paris - Pierre et Marie Curie), France (2013). He also holds an MSc. degree in Electrical Engineering (2010) and a BSc. degree in Computer Science (2008), both from the Universidade de São Paulo, Brazil.

Diego F. Aranha is an Assistant Professor in the Institute of Computing at the University of Campinas (Unicamp). He holds a PhD degree in Computer Science from the University of Campinas and has worked as a visiting PhD student for 1 year at the University of Waterloo. His professional experience is in Cryptography and Computer Security, with a special interest in the efficient implementation of cryptographic algorithms and the security analysis of real-world systems. He coordinated the first team of independent researchers capable of detecting and exploiting vulnerabilities in the software of the Brazilian voting machine during controlled tests organized by the electoral authority. He received the Google Latin America Research Award for research on privacy twice, and the MIT TechReview’s Innovators Under 35 Brazil Award for his work in electronic voting.

Fábio Borges is Professor in the doctoral program at the Brazilian National Laboratory for Scientific Computing (LNCC in Portuguese). He holds a doctoral degree (Dr.-Ing.) from the Department of Computer Science at TU Darmstadt, a master’s degree in Computational Modeling from LNCC, and a bachelor’s degree in Mathematics from Londrina State University (UEL). Currently, he is developing research at the LNCC in the fields of Algorithms, Security, Privacy, and Smart Grid. Further information is found at http://www.lncc.br/~borges/ .

Michele Nogueira is an Associate Professor in the Computer Science Department at the Federal University of Paraná. She received her doctorate in Computer Science from UPMC — Sorbonne Universités, Laboratoire d'Informatique de Paris VI (LIP6), in 2009. Her research interests include wireless networks, security, and dependability. For many years she has worked on providing resilience to self-organized, cognitive, and wireless networks through adaptive and opportunistic approaches. Dr. Nogueira was one of the pioneers in addressing survivability issues in self-organized wireless networks; her works "A Survey of Survivability in Mobile Ad Hoc Networks" and "An Architecture for Survivable Mesh Networking" are among her most prominent scientific contributions. She is an Associate Technical Editor for the IEEE Communications Magazine and the Journal of Network and Systems Management, and she serves as Vice-Chair of the IEEE ComSoc Internet Technical Committee. She is an ACM and IEEE Senior Member.

Michelle S. Wangham is a Professor at the University of Vale do Itajaí (Brazil). She received her M.Sc. and Ph.D. in Electrical Engineering from the Federal University of Santa Catarina (UFSC) in 2004. She was recently a Visiting Researcher at the University of Ottawa. Her research interests are vehicular networks, security in embedded and distributed systems, identity management, and network security. She is a consultant for the Brazilian National Research and Education Network (RNP), acting as coordinator of the Identity Management Technical Committee (CT-GID) and as a member of the Network Monitoring Technical Committee. Since 2013, she has coordinated the GIdLab project, a testbed for R&D in identity management.

Min Wu received the B.E. degree (Highest Honors) in electrical engineering - automation and the B.A. degree (Highest Honors) in economics from Tsinghua University, Beijing, China, in 1996, and the Ph.D. degree in electrical engineering from Princeton University in 2001. Since 2001, she has been with the University of Maryland, College Park, where she is currently a Professor and a University Distinguished Scholar-Teacher. She leads the Media and Security Team, University of Maryland, where she is involved in information security and forensics and multimedia signal processing. She has coauthored two books and holds nine U.S. patents on multimedia security and communications. Dr. Wu coauthored several papers that won awards from the IEEE, ACM, and EURASIP, respectively. She also received an NSF CAREER award in 2002, a TR100 Young Innovator Award from the MIT Technology Review Magazine in 2004, an ONR Young Investigator Award in 2005, a ComputerWorld “40 Under 40” IT Innovator Award in 2007, an IEEE Mac Van Valkenburg Early Career Teaching Award in 2009, a University of Maryland Invention of the Year Award in 2012 and in 2015, and an IEEE Distinguished Lecturer recognition in 2015–2016. She has served as the Vice President-Finance of the IEEE Signal Processing Society (2010–2012) and the Chair of the IEEE Technical Committee on Information Forensics and Security (2012–2013). She is currently the Editor-in-Chief of the IEEE Signal Processing Magazine. She was elected IEEE Fellow for contributions to multimedia security and forensics.

Dr. Jie Liu is a Principal Researcher at Microsoft AI and Research in Redmond, WA. His research interests are rooted in sensing and interacting with the physical world through computing. Examples include time, location, and energy awareness, and the Internet/Intelligence of Things. He has published broadly in areas such as sensor networking, embedded devices, mobile and ubiquitous computing, and data center management, and he has received six best paper awards at top academic conferences in these fields. In addition, he holds more than 100 patents. He is the Steering Committee chair of Cyber-Physical Systems (CPS) Week and ACM/IEEE IPSN, and a Steering Committee member of ACM SenSys. He is an Associate Editor of ACM Transactions on Sensor Networks, was an Associate Editor of IEEE Transactions on Mobile Computing, and has chaired a number of top-tier conferences. Among other recognitions, he received the Leon Chua Award from UC Berkeley in 2001, the Technology Advance Award from (Xerox) PARC in 2003, and a Gold Star Award from Microsoft in 2008. He received his Ph.D. from the Department of Electrical Engineering and Computer Sciences at UC Berkeley in 2001, and his master's and bachelor's degrees from the Department of Automation, Tsinghua University, Beijing, China. From 2001 to 2004, he was a research scientist at the Palo Alto Research Center (formerly Xerox PARC). He is an ACM Distinguished Scientist and an IEEE Senior Member.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article.

Oliveira, L., Pereira, F., Misoczki, R. et al. The computer for the 21st century: present security & privacy challenges. J Internet Serv Appl 9, 24 (2018). https://doi.org/10.1186/s13174-018-0095-2

Received: 13 April 2018

Accepted: 27 August 2018

Published: 04 December 2018


Mark Weiser's The Computer for the 21st Century: A Journal Review

Sandra San Carlos

Related Papers

José Julián Gómez Salazar


International Journal of Human-Computer Interaction

Neville Stanton

Allan Parsons

"Interdisciplinary collaboration, to include those who are not natural scientists, engineers and computer scientists, is inherent in the idea of ubiquitous computing, as formulated by Mark Weiser in the late 1980s and early 1990s. However, ubiquitous computing has remained largely a computer science and engineering concept, and its non-technical side remains relatively underdeveloped. The aim of the following is, first, to clarify the kind of interdisciplinary collaboration envisaged by Weiser. Second, the difficulties of understanding the everyday and weaving ubiquitous technologies into the fabric of everyday life until they are indistinguishable from it, as conceived by Weiser, are explored. The contributions of Anne Galloway, Paul Dourish and Philip Agre to creating an understanding of everyday life relevant to the development of ubiquitous computing are discussed, focusing on the notions of performative practice, embodied interaction and contextualisation. Third, it is argued that with the shift to the notion of ambient intelligence, the larger scale socio-economic and socio-political dimensions of context become more explicit, in contrast to the focus on the smaller scale anthropological study of social (mainly workplace) practices inherent in the concept of ubiquitous computing. This can be seen in the adoption of the concept of ambient intelligence within the European Union and in the focus on rebalancing (personal) privacy protection and (state) security in the wake of 11 September 2001. Fourth, the importance of adopting a futures-oriented approach to discussing the issues arising from the notions of ubiquitous computing and ambient intelligence is stressed, while the difficulty of trying to achieve societal foresight is acknowledged."

Journal for the Theory of Social Behaviour

Lars-Erik Janlert

Johannes Thumfart

Sprouts: Working Papers on Information …

Michel Avital

Personal and Ubiquitous …

David Frohlich

ACM SIGMOBILE Mobile Computing and Communications Review

Marianne Christine De Guzman

The Invisible Machine

In a journal article entitled The Computer for the 21st Century, Mark Weiser looks beyond the basic functions of computing toward a future in which the machine is experienced by its users yet hidden from their lives. He introduced the concept of ubiquitous computing by referring to it as an "invisible machine": computers unconsciously integrated into the performance of everyday tasks. He sketches a vision of how 21st-century technology could revolutionize the electronic devices in our lives. Building on this vision, Weiser cited examples of inch-scale machines able to take over the fundamental functions of writing and display surfaces such as papers, boards, clocks, and Post-It notes. He referred to them as Pads, Tabs, and Boards, which comprise the initial components of the ubiquitous computing platform. Weiser rigorously explained the functions of each hardware component and its implications for the creation of an embodied virtual reality. The purpose of the article was to show how ubiquitous computing, as an emerging mode of computer technology, can introduce a faster and more convenient way of performing tasks while providing smooth interaction between people and computers. Since the article was written in 1991, several of Weiser's predictions have proved strikingly accurate given the booming growth of technology from the beginning of the 21st century to the present. The late Mark Weiser was the chief scientist of Xerox PARC, a research and development company, and the father of ubiquitous computing; his notable contribution to computer science was the set of principles describing the concept of ubiquitous computing. In this paper, the concept of Mark Weiser's "invisible machine" is explored, along with how it continues to shape the evolution of modern technology and human life.

International Journal of Technoethics

Mariana Broens

In this article, the authors investigate, from an interdisciplinary perspective, possible ethical implications of the presence of ubiquitous computing systems in human perception/action. The term ubiquitous computing is used to characterize information-processing capacity from computers that are available everywhere and all the time, integrated into everyday objects and activities. The underlying theme of the paper is the contrast between traditional treatments of the ethical issues raised by ubiquitous computing and the Ecological Philosophy view of its possible consequences for perception/action. The focus is on an analysis of how the generalized dissemination of microprocessors in embedded systems, commanded by a ubiquitous computing system, can affect the behaviour of people considered as embodied embedded agents.


BUILDING SKILLS FOR LIFE

This report makes the case for expanding computer science education in primary and secondary schools around the world, and outlines the key challenges standing in the way. Through analysis of regional and national education systems at various stages of progress in implementing computer science education programs, the report offers transferable lessons learned across a wide range of settings with the aim that all students—regardless of income level, race, or sex—can one day build foundational skills necessary for thriving in the 21st century.

Download the full report

Introduction

Access to education has expanded around the world since the late 1990s through the combined efforts of governments, bilateral and multilateral agencies, donors, civil society, and the private sector, yet education quality has not kept pace. Even before the COVID-19 pandemic led to school closures around the world, many young people were not developing the broad suite of skills they need to thrive in work, life, and citizenship (Filmer, Langthaler, Stehrer, & Vogel, 2018).

The impact of the pandemic on education investment, student learning, and longer-term economic outcomes threatens not only to dial back progress to date in addressing this learning crisis in skills development but also to further widen learning gaps within and between countries. Beyond the immediate and disparate impacts of COVID-19 on students’ access to quality learning, the global economic crisis it has precipitated will shrink government budgets, potentially resulting in lower education investment and impacting the ability to provide quality education (Vegas, 2020). There is also a concern that as governments struggle to reopen schools and/or provide sufficient distance-learning opportunities, many education systems will focus on foundational skills, such as literacy and numeracy, neglecting a broader set of skills needed to thrive in a rapidly changing, technologically-advanced world.

Among these broader skills, knowledge of computer science (CS) is increasingly relevant. CS is defined as “the study of computers and algorithmic processes, including their principles, their hardware and software designs, their [implementation], and their impact on society” (Tucker, 2003). 1 CS skills enable individuals to understand how technology works, and how best to harness its potential to improve lives. The goal of CS education is to develop computational thinking skills, which refer to the “thought processes involved in expressing solutions as computational steps or algorithms that can be carried out by a computer” (K-12 Computer Science Framework Steering Committee, 2016). CS education is also distinct from computer or digital literacy, in that it is more concerned with computer design than with computer use. For example, coding is a skill one would learn in a CS course, while creating a document or slideshow presentation using an existing program is a skill one would learn in a computer or digital literacy course.
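To make the distinction concrete, here is a minimal sketch (our illustration, not taken from the report) of what "expressing a solution as computational steps" can look like in practice. Producing the same number with a spreadsheet's built-in AVERAGE function would be an exercise in digital literacy; writing out the steps yourself is the beginning of computational thinking.

```python
# Minimal illustration (not from the report): computational thinking means
# spelling out a solution as explicit steps a computer can carry out.

def average_score(scores):
    """Compute the mean of a list of test scores, step by step."""
    total = 0
    count = 0
    for score in scores:      # step 1: visit every score and accumulate it
        total += score
        count += 1
    if count == 0:            # step 2: decide what to do when there is no data
        return None
    return total / count      # step 3: divide the sum by the number of scores

print(average_score([72, 85, 90, 64]))  # prints 77.75
```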

Research has shown that students benefit from CS education through increased college enrollment rates and improved problem-solving abilities (Brown & Brown, 2020; Salehi et al., 2020). Research has also shown that lessons in computational thinking improve student response inhibition, planning, and coding skills (Arfé et al., 2020). Importantly, CS skills pay off in the labor market through a higher likelihood of employment and better wages (Hanson & Slaughter, 2016; Nager & Atkinson, 2016). As these skills grow ever more important in the rapidly changing 21st century, CS education promises to significantly enhance student preparedness for the future of work and active citizenship.

The benefits of CS education extend beyond economic motivations. Given the increasing integration of technology into many aspects of daily life in the 21st century, a functional knowledge of how computers work—beyond the simple use of applications—will help all students.

Why expand CS education?

By this point, many countries have begun making progress toward offering CS education more universally for their students. The specific reasons for offering it will be as varied as the countries themselves, though economic arguments often top the list of motivations. Other considerations beyond economics, however, are also relevant, and we account for the most common of these varied motives here.

The economic argument

At the macroeconomic level, previous research has suggested that countries with more workers with ICT (information and communications technology) skills will have higher economic growth through increases in productivity (Maryska, Doucek, & Kunstova, 2012; Jorgenson & Vu, 2016). Recent global data indicate that there is a positive relationship between the share of a country's workforce with ICT skills and its economic growth. For example, using data from the Organisation for Economic Cooperation and Development (OECD), we find that countries with a higher share of graduates from an ICT field tend to have higher rates of per capita GDP growth (Figure 1). The strength of the estimated relationship here is noteworthy: A one percentage point increase in the share of ICT graduates correlates with nearly a quarter percentage point increase in recent economic growth, though we cannot determine the causal nature of this relationship (if any). Nonetheless, this figure supports the common view that economic growth follows from greater levels of investment in technological education.
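The figure described above is a cross-country regression, so the quoted coefficient is simply a slope. The sketch below is our own illustration with synthetic data (not the OECD data behind Figure 1); it shows how such a slope is estimated and read: a value near 0.25 means one additional percentage point of ICT graduates is associated with roughly a quarter of a percentage point of additional growth, with no claim of causality.

```python
# Illustrative only: synthetic data standing in for the OECD figures behind
# Figure 1, used to show how a cross-country slope is estimated and read.
import numpy as np

rng = np.random.default_rng(seed=42)
ict_share = rng.uniform(1.0, 10.0, size=40)   # % of graduates from ICT fields (made up)
growth = 1.0 + 0.25 * ict_share + rng.normal(0.0, 0.5, size=40)  # % GDP growth (made up)

slope, intercept = np.polyfit(ict_share, growth, deg=1)
print(f"slope = {slope:.2f}")   # ~0.25: +1 pp of ICT graduates ~ +0.25 pp of growth
# Correlation is not causation: nothing here says the ICT share drives the growth.
```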

At the microeconomic level, CS skills pay off for individuals—both for those who later choose to specialize in CS and those who do not. Focusing first on the majority of students who pursue careers outside of CS, foundational training in CS is still beneficial. Technology is becoming more heavily integrated across many industrial endeavors and academic disciplines—not just those typically included under the umbrella of science, technology, engineering, and mathematics (STEM) occupations. Careers from law to manufacturing to retail to health sciences all use computing and data more intensively now than in decades past (Lemieux, 2014). For example, using data from Germany, researchers showed that higher education programs in CS compared favorably against many other fields of study, producing a relatively high return on investment for lower risk (Glocker and Storck, 2014). Notably, completing advanced training in CS is not necessary to attain these benefits; rather, even short introductions to foundational skills in CS can increase young students’ executive functions (Arfe et al., 2020). Further, those with CS training develop better problem-solving abilities compared to those with more general training in math and sciences, suggesting that CS education offers unique skills not readily developed in other more common subjects (Salehi et al., 2020).

For those who choose to pursue advanced CS studies, specializing in CS pays off both in employment opportunities and earnings. For example, data from the U.S. show workers with CS skills are less likely to be unemployed than workers in other occupations (Figure 2). Moreover, the average earnings for workers with CS skills are higher than for workers in other occupations (Figure 3). These results are consistent across multiple studies using U.S. data (Carnevale et al., 2013; Altonji et al., 2012) and international data (Belfield et al., 2019; Hastings et al., 2013; Kirkeboen et al., 2016). Further, the U.S. Bureau of Labor Statistics has projected that the market for CS professionals will continue to grow at twice the speed of the rest of the labor market between 2014 and 2024 (National Academies of Sciences, 2018).

A common, though inaccurate, perception about the CS field is that anybody with a passion for technology can succeed without formal training. There is a nugget of truth in this view, as many leaders of major technology companies, including Bill Gates, Elon Musk, and Mark Zuckerberg, have famously risen to the top of the field despite not having bachelor's degrees in CS. Yet, it is a fallacy to assume that these outliers are representative of most who are successful in the field. This misconception could lead observers to conclude that investments in universal CS education are, at best, ineffective: providing skills to people who would learn them on their own regardless, and spending resources on developing skills in people who will not use them. However, such conclusions are not supported by empirical evidence. Rather, across STEM disciplines, including CS, those with higher levels of training and educational attainment achieve stronger employment outcomes, on average, than those with less training in the same fields (Altonji et al., 2016; Altonji and Zhong, 2021).

The inequality argument

Technology—and particularly unequal access to its benefits—has been a key driver of social and economic inequality within countries. That is, those with elite social status or higher wealth have historically gotten access to technology first for their private advantages, which tends to reinforce preexisting social and economic inequalities. Conversely, providing universal access to CS education and computing technologies can enable those with lower access to technological resources the opportunity to catch up and, consequently, mitigate these inequalities. Empirical studies have shown how technological skills or occupations, in particular, have reduced inequalities between groups or accelerated the assimilation of immigrants (Hanson and Slaughter, 2017; DeVol, 2016).

Technology and CS education are likewise frequently considered critical in narrowing income gaps between developed and developing countries. This argument can be particularly compelling for low-income countries, as global development gaps can only be expected to widen if low-income countries' investments in these domains falter while high-income countries continue to move ahead. Accordingly, strategic and intensive technological investment is often seen as a key strategy for less-developed countries to leapfrog stages of economic development and quickly catch up to more advanced countries (Fong, 2009; Lee, 2019).

CS skills enable adaptation in a quickly changing world, and adaptability is critical to progress in society and the economy. Perhaps there is no better illustration of the ability to thrive and adapt than the COVID-19 pandemic. The pandemic has forced closures of many public spaces across the globe, though those closures' impacts have been felt unevenly across workers and sectors. Workers with the skills and abilities to move their job functions online have generally endured the pandemic more comfortably than those without those skills. Even more importantly, the organizations and private companies that had the human capacity to identify how technology could be utilized and applied to their operations could adapt in the face of the pandemic, while those without the resources to pivot their operations have frequently been forced to close in the wake of pandemic-induced restrictions. Thus, the pandemic bestowed comparative benefits on those with access to technology, the skills to use it, and the vision to recognize and implement novel applications quickly, while often punishing those with the least access and resources (OECD, 2021).

Failing to invest in technology and CS education may result in constrained global competitiveness, leaving governments less able to support their citizens. We recognize that efforts to expand CS education will demand time and money from public officials and school leaders, often in the face of other worthy competing demands. Though the present-day costs may seem prohibitive in some cases, the costs of inaction (while less immediately visible) are also real and meaningful in most contexts.

Beyond economics

We expect the benefits of CS education to extend beyond economic motivations, as well. Many household activities that were previously performed in person are now often performed digitally, from banking and shopping to travel planning and socializing. A functional knowledge of how computers work—beyond the simple use of applications—should benefit all students as they mature into adults given the increasing integration of technology into many aspects of daily life in the 21st century. For example, whether a person wants to find a job or a romantic partner, these activities frequently occur through the use of technology, and understanding how matching algorithms work makes for more sophisticated technology users in the future. Familiarity with basic CS principles can provide users with more flexibility in the face of constant innovation and make them less vulnerable to digital security threats or predators (Livingstone et al., 2011). Many school systems now provide lessons in online safety for children, and those lessons will presumably be more effective if children have a foundational understanding of how the internet works.

Global advances in expanding CS education

To better understand what is needed to expand CS education, we first took stock of the extent to which countries around the world have integrated CS education into primary and secondary schools, and how this varied by region and income level. We also reviewed the existing literature on integrating CS into K-12 education to gain a deeper understanding of the key barriers and challenges to expanding CS education globally. Then, we selected jurisdictions at various stages of progress in implementing CS education programs from multiple regions of the world and income levels, and drafted in-depth case studies on the origins, key milestones, barriers, and challenges of CS expansion.

Progress in expanding CS education across the globe

As shown in Figure 4, the extent to which CS education is offered in primary and secondary schools varies across the globe. Countries with mandatory CS education are geographically clustered in Eastern Europe and East Asia. Most states and provinces in the U.S. and Canada offer CS on a school-to-school basis or as an elective course. Multiple countries in Western Europe offer CS education as a cross-curricular topic integrated into other subjects. Latin America and Central and Southeast Asia have the most countries that have announced CS education programs or pilot projects. Countries in Africa and the Middle East have integrated the least amount of CS education into school curricula. Nevertheless, the number of countries piloting programs or adopting CS curricula indicates a global trend of more education systems integrating the subject into their curricula.

As expected, students living in higher-income countries generally have better access to CS education. As Figure 5 shows, 43 percent of high-income countries require students to learn CS in primary and/or secondary schools. Additionally, high-income countries also offer CS as an elective course to the largest share of the population. A further 35 percent of high-income countries offer CS on a school-to-school basis while not making it mandatory for all schools. Interestingly, upper-middle-income countries host the largest share of students (62 percent) who are required to learn CS at any point in primary or secondary schools. Presumably, many upper-middle-income countries have national economic development strategies focused on expanding tech-related jobs, and thus see the need to expand the labor force with CS skills. By contrast, only 5 percent of lower-middle-income countries require CS during primary or secondary school, while 58 percent may offer CS education on a school-to-school basis.

Key barriers and challenges to expand CS education globally

To expand quality CS education, education systems must overcome enormous challenges. Many countries do not have enough teachers who are qualified to teach CS, and even though student interest in CS is growing, relatively few students pursue more advanced training like CS testing certifications (Department for Education, 2019) or CS undergraduate majors compared to other STEM fields like engineering or biology (Hendrickson, 2019). This is especially true for girls and underrepresented minorities, who generally have fewer opportunities to develop an interest in CS and STEM more broadly (Code.org & CSTA, 2018). Our review of the literature identified four key challenges to expanding CS education:

1. Providing access to ICT infrastructure to students and educators

Student access to ICT infrastructure, including both personal access to computing devices and an internet connection, is critical to a robust CS education. Without this infrastructure, students cannot easily integrate CS skills into their daily lives, and they will have few opportunities to experiment with new approaches on their own.

However, some initiatives have succeeded by introducing elements of CS education in settings without adequate ICT infrastructure. For example, many educators use alternative learning strategies like CS Unplugged to teach CS and computational thinking when computers are unavailable (Bell & Vahrenhold, 2018). One study shows that analog lessons can help primary school students develop computational thinking skills (Harris, 2018). Even without laptops or desktop computers, it is still possible for teachers to use digital tools for computational thinking. In South Africa, Professor Jean Greyling of Nelson Mandela University Computing Sciences co-created Tanks, a game that uses puzzle pieces and a mobile application to teach coding to children (Ellis, 2021). This is an especially useful concept, as many households and schools in South Africa and other developing countries have smartphones and access to analog materials but do not have access to personal computers or broadband connectivity (McCrocklin, 2021).

Taking a full CS curriculum to scale, however, requires investing in adequate access to ICT infrastructure for educators and students (Lockwood & Cornell, 2013). Indeed, as discussed in Section 3, our analysis of numerous case studies indicates that ICT infrastructure in schools provides a critical foundation to expand CS education.

2. Ensuring qualified teachers through teacher preparation and professional development

Many education systems encounter shortages of qualified CS teachers, contributing to a major bottleneck in CS expansion. A well-prepared and knowledgeable teacher is the most important component for instruction in commonly taught subjects (Chetty et al., 2014a, 2014b; Rivkin et al., 2005). We suspect this is no different for CS, though major deficiencies in the necessary CS skills among the teacher workforce are evident. For example, in a survey of preservice elementary school teachers in the United States, only 10 percent responded that they understood the concept of computational thinking (Campbell & Heller, 2019). As recently as 2015, 75 percent of U.S. teachers incorrectly considered "creating documents or presentations on the computer" as a topic one would learn in a CS course (Google & Gallup, 2015), demonstrating a poor understanding of the distinction between CS and computer literacy. Other case studies, surveys, and interviews have found that teachers in India, Saudi Arabia, the U.K., and Turkey self-report low confidence in their understanding of CS (Ramen et al., 2015; Alfayez & Lambert, 2019; Royal Society, 2017; Gülbahar & Kalelioğlu, 2017). Indeed, developing the necessary skills and confidence levels for teachers to offer effective CS instruction remains challenging.

To address these challenges, school systems have introduced continuous professional development (PD), postgraduate certification programs, and CS credentials issued by teacher education degree programs. PD programs are a common approach, as they draw on the existing teacher workforce to fill the need for specialized skills, rather than recruiting specialized teachers from outside the school system. For example, the British Computer Society (BCS) created 10 regional university-based hubs to lead training activities, including lectures and meetings, to facilitate collaboration as part of its Network of Excellence (Dickens, 2016; Heintz et al., 2016; Royal Society, 2017). Most hubs involve multi-day seminars and workshops meant to familiarize teachers with CS concepts and provide ongoing support to help teachers as they encounter new challenges in the classroom. Cutts et al. (2017) further recommend teacher-led PD groups so that CS teachers can form collaborative professional networks. Various teacher surveys have found these PD programs in CS helpful (Alkaria & Alhassan, 2017; Goode et al., 2014). Still, more evidence is needed on the effectiveness of PD programs in CS education specifically (Hill, 2009).

Less commonly, some education systems have worked with teacher training institutions to introduce certification schemes so teachers can signal their unique qualifications in CS to employers. This signal can make teacher recruitment more transparent and incentivize more teachers to pursue training. This approach does require, though, an investment in developing CS education faculty at the teacher training institution, which may be a critical bottleneck in many places (Delyser et al., 2018). Advocates of the approach have recommended that school systems initiate certification schemes quickly and with a low bar at first, followed by improvement over time (Code.org, 2017; Lang et al., 2013; Sentance & Csizmadia, 2017). Short-term recommendations include giving temporary licenses to teachers who meet minimum content and knowledge requirements. Long-term recommendations, on the other hand, encourage preservice teachers to take CS courses as part of their teaching degree programs or in-service teachers to take CS courses as part of their graduate studies to augment their skillset.2 Upon completing these courses, teachers would earn a full CS endorsement or certificate.

3. Fostering student engagement and interest in CS education

Surveys from various countries suggest that despite a clear economic incentive, relatively few K-12 students express interest in pursuing advanced CS education. For example, 3 out of 4 U.S. students in a recent survey declared no interest in pursuing a career in computer science. And the differences by gender are notable: Nearly three times as many male students (33 percent) compared to female students (12 percent) expressed interest in pursuing a computer science career in the future (Google & Gallup, 2020).

Generally, parents view CS education favorably but also hold distinct misconceptions. For instance, more than 80 percent of U.S. parents surveyed in a Google and Gallup (2016) study reported that they think CS is as important as any other discipline. Nevertheless, the same parents indicated biases around who should take CS courses: 57 percent of parents think that one needs to be "very smart" to learn CS (Google & Gallup, 2015). Researchers have equated this kind of thinking to the idea that some people could be inherently gifted or inept at CS, a belief that could discourage some students from developing an interest or talent in CS (McCartney, 2017). Contrary to this belief, Patitsas et al. (2019) found that only 5.8 percent of university-level exam distributions were multimodal, indicating that most classes did not have a measurable divide between those who were inherently gifted and those who were not. This suggests that aptitude for CS is no more confined to particular groups of students than aptitude for any other subject.

Fostering student engagement, however, does not equate to developing a generation of programmers. Employment projections suggest the future demand for workers with CS skills will likely outpace supply unless students' interest in the field is promoted. Yet, no countries expand access to CS education with the expectation of turning all students into computer programmers. Forcing students into career paths that are unnatural fits for their interests and skill levels results in worse outcomes for students at the decision margins (Kirkeboen et al., 2016). Rather, current engagement efforts both expose students to foundational skills that help them navigate technology in 21st-century life and provide opportunities for students to explore technical fields.

A lack of diversity in CS education not only excludes some people from accessing high-paying jobs, but it also reduces the number of students who would enter and succeed in the field (Du & Wimmer, 2019). Girls and racial minorities have been historically underrepresented in CS education (Sax et al., 2016). Research indicates that the diversity gap is not due to innate talent differences among demographic groups (Sullivan & Bers, 2012; Cussó-Calabuig et al., 2017), but rather a disparity of access to CS content (Google & Gallup 2016; Code.org & CSTA, 2018; Du & Wimmer, 2019), widely held cultural perceptions, and poor representation of women and underrepresented minorities (URMs) among industry leaders and in media depictions (Google & Gallup, 2015; Ayebi-Arthur, 2011; Downes & Looker, 2011).

To help meet the demand for CS professionals, government and philanthropic organizations have implemented programs that familiarize students with CS. By increasing student interest among K-12 students who may eventually pursue CS professions, these strategies have the potential to address the well documented lack of diversity in the tech industry (Harrison, 2019; Ioannou, 2018). For example, some have used short, one-time lessons in coding to reduce student anxiety around CS. Of these lessons, perhaps the best known is Hour of Code, designed by Code.org. In multiple surveys, students indicated more confidence after exposure to this program (Phillips & Brooks, 2017; Doukaki et al., 2013; Lang et al., 2016). It is not clear, however, whether these programs make students more likely to consider semester-long CS courses (Phillips & Brooks, 2017; Lang et al., 2016).

Other initiatives create more time-intensive programs for students. The U.S. state of Georgia, for example, implemented a program involving after-school, weekend, and summer workshops over a six-year period. Georgia saw an increase in participation in the Advanced Placement (AP) CS exam during the duration of the program, especially among girls and URMs (Guzdial et al., 2014). Other states have offered similar programs, setting up summer camps and weekend workshops in universities to help high school students become familiar with CS (Best College Reviews, 2021). These initiatives, whether one-off introductions to CS or time-intensive programs, typically share the explicit goal of encouraging participation in CS education among all students, and especially girls and URMs.

Yet, while studies indicate that Hour of Code and summer camps might improve student enthusiasm for CS, they do not provide the kind of rigorous impact assessment one would need to draw a definitive conclusion about their effectiveness. These studies do not use a valid control group, meaning there is no like-for-like comparison with similar students who were not exposed to the program. It is not clear, for instance, that the increase in girls and URMs taking CS in Georgia would not have happened without the after-school clubs.

4. Generating and using evidence on curriculum and core competencies, instructional methods, and assessment

There is no one-size-fits-all CS curriculum for all education systems, schools, or classrooms. Regional contexts, school infrastructure, prior access, and exposure to CS need to be considered when developing CS curricula and competencies (Ackovska et al., 2015). Some CS skills, such as programming language, require access to computer infrastructure that may be absent in some contexts (Lockwood & Cornell, 2013). Rather than prescribing a curriculum, the U.S. K-12 Computer Science Framework Steering Committee (2016) recommends foundational CS concepts and competencies for education systems to consider. This framework encourages curriculum developers and educators to create learning experiences that extend beyond the framework to encompass student interests and abilities.

There is increasing consensus around what core CS competencies students should master when they complete primary and secondary education. Core competencies that students may learn by the end of primary school include the following (a brief illustrative sketch in code follows the list):

  • abstraction—creating a model to solve a problem;
  • generalization—remixing and reusing resources that were previously created;
  • decomposition—breaking a complex task into simpler subtasks;
  • algorithmic thinking—defining a series of steps for a solution, putting instructions in the correct sequence, and formulating mathematical and logical expressions;
  • programming—understanding how to code a solution using the available features and syntax of a programming language or environment; and
  • debugging—recognizing when instructions do not correspond to actions and then removing or fixing errors (Angeli, 2016).
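As a rough illustration of how several of these competencies fit together (our sketch, not drawn from Angeli, 2016), the short program below decomposes the task "report the tallest student" into two subtasks and expresses each as an explicit algorithm; stepping through it when the output looks wrong is exactly the debugging competency listed above.

```python
# Illustrative sketch only: decomposition, algorithmic thinking, and programming
# applied to a classroom-sized task.

def tallest(heights):
    """Subtask 1: find the largest value in a list of heights (in cm)."""
    largest = heights[0]
    for h in heights[1:]:            # compare each remaining value in turn
        if h > largest:
            largest = h
    return largest

def report(names, heights):
    """Subtask 2: pair the largest height with the matching student's name."""
    top = tallest(heights)
    position = heights.index(top)    # reuse a built-in search (generalization)
    return f"{names[position]} is tallest at {top} cm"

print(report(["Ana", "Bea", "Caio"], [142, 155, 149]))  # Bea is tallest at 155 cm
```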

Competencies that secondary school students may learn in CS courses include:

  • logical and abstract thinking;
  • representations of data, including various kinds of data structures;
  • problem-solving by designing and programming algorithms using digital devices;
  • performing calculations and executing programs;
  • collaboration; and,
  • ethics such as privacy and data security (Syslo & Kwiatkowska, 2015).

Several studies have described various methods for teaching CS core competencies. Integrated development environments are recommended especially for teaching coding skills (Florez et al., 2017; Saez-Lopez et al., 2016). 2 These environments include block-based programming languages that encourage novice programmers to engage with programming, in part by alleviating the burden of syntax on learners (Weintrop & Wilensky, 2017; Repenning, 1993). Others recommend a variety of teaching methods that blend computerized lessons with offline activities (Taub et al., 2009; Curzon et al., 2009; Ackovska et al., 2015). This approach is meant to teach core concepts of computational thinking while keeping students engaged in physical, as well as digital, environments (Nishida et al., 2009). CS Unplugged, for example, provides kinesthetic lesson plans that include games and puzzles that teach core CS concepts like decomposition and algorithmic thinking.

Various studies have also attempted to evaluate traditional lecture-based instruction for CS (Alhassan, 2017; Cicek & Taspinar, 2016). 3 These studies, however, rely on small samples in which the treatment and control groups each comprised a single class. More rigorous research is required to understand the effectiveness of teaching strategies for CS.

No consensus has emerged on the best ways to assess student competency in core CS concepts (So et al., 2019; Djambong & Freiman, 2016). Though various approaches to assessment are widely available—including classical cognitive tests, standardized tests in digital environments, and CS Unplugged activity tests—too many countries have yet to introduce regular assessments that could evaluate different curricula or instructional methods in CS. While several assessments of CS and computational thinking have been developed for various grade levels as part of research studies, their broader use has faced challenges: a lack of large-scale studies using these assessments, the diversity of programming environments used to teach programming and CS, and, in some cases, simply a lack of interest in objective tests of learning (as opposed to student projects and portfolios).

Fortunately, a growing number of organizations are developing standardized tests in CS and computational thinking. For example, the International Computer and Information Literacy Study included examinations in computational thinking in 2018 that had two 25-minute modules, where students were asked to develop a sequence of tasks in a program that related to a unified theme (Fraillon et al., 2020). The OECD’s PISA will also include questions in 2021 to assess computational thinking across countries (Schleicher & Partovi, 2019). The AP CS exam has also yielded useful comparisons that have indirectly evaluated CS teacher PD programs (Brown & Brown, 2019).

In summary, the current evidence base provides little consensus on the specific means of scaling a high-quality CS education and leaves wide latitude for experimentation. Consequently, in this report we do not offer prescriptions on how to expand CS education, even while arguing that expanding access to it generally is beneficial for students and the societies that invest in it. Given the current (uneven) distribution of ICT infrastructure and CS education resources, high-quality CS education may be at odds with expanded access. While we focus on ensuring universal access first, it is important to recognize that as CS education scales both locally and globally, the issues of curricula, pedagogies, instructor quality, and evaluation naturally become more pressing.

Lessons from education systems that have introduced CS education

Based on the available literature discussed in the previous section, we selected education systems that have implemented CS education programs and reviewed their progress through in-depth case studies. Intentionally, we selected jurisdictions at various levels of economic development, at different levels of progress in expanding CS education, and from different regions of the world. They include Arkansas (U.S.), British Columbia (Canada), Chile, England, Italy, New Brunswick (Canada), Poland, South Africa, South Korea, Thailand, and Uruguay. For each case, we reviewed the historical origins for introducing CS education and the institutional arrangements involved in CS education’s expansion. We also analyzed how the jurisdictions addressed the common challenges of ensuring CS teacher preparation and qualification, fostering student demand for CS education (especially among girls and URMs), and how they developed curriculum, identified core competencies, promoted effective instruction, and assessed students’ CS skills. In this section, we draw lessons from these case studies, which can be downloaded and read in full at the bottom of this page .

Figure 6 presents a graphical representation summarizing the trajectories of the case study jurisdictions as they expanded CS education. Together, the elements in the figure provide a rough approximation of how CS education has expanded in recent years in each case. For example, when South Korea focused its efforts on universal CS education in 2015, basic ICT infrastructure and broadband connectivity were already available in all schools and two CS education expansion policies had been previously implemented. Its movement since 2015 is represented purely in the vertical policy action space, as it moved up four intervals on the index. Uruguay, conversely, started expanding its CS education program at a lower level both in terms of ICT infrastructure (x-axis) and existing CS policies (y-axis). Since starting CS expansion efforts in 2007, though, it has built a robust ICT infrastructure in its school system and implemented 4 of 7 possible policy actions.

Figure 6 suggests that first securing access to ICT infrastructure and broadband connectivity allows systems to dramatically improve access to and the quality of CS education. Examples include England, British Columbia, South Korea, and Arkansas. At the same time, Figure 6 suggests that systems that face the dual challenge of expanding ICT infrastructure and broadband connectivity and scaling the delivery of quality CS education, such as Chile, South Africa, Thailand, and Uruguay, may require more time and/or substantial investment to expand quality CS education to match the former cases. Even though Chile, Thailand, and especially Uruguay have made impressive progress since their CS education expansion efforts began, they continue to lag a few steps behind those countries that started with established ICT infrastructure in place.

Our analysis of these case studies surfaced six key lessons (Figure 7) for governments wishing to take CS education to scale in primary and secondary schools, which we discuss in further detail below.

1. Expanding tech-based jobs is a powerful lever for expanding CS education

In several of the case studies, economic development strategies were the underlying motivation to introduce or expand CS education. For example, Thailand’s 2017 20-year Strategic Plan marked the beginning of CS education in that country. The 72-page document, approved by the Thai Cabinet and Parliament, explained how Thailand could become a more “stable, prosperous, and sustainable” country and proposed to reform the education curriculum to prepare students for future labor demands (20-year National Strategy comes into effect, 2018). Similarly, Arkansas’s Governor Hutchinson made CS education a key part of his first campaign in 2014 (CS for All, n.d.), stating that “Through encouraging computer science and technology as a meaningful career path, we will produce more graduates prepared for the information-based economy that represents a wide-open job market for our young people” (Arkansas Department of Education, 2019).

Uruguay's Plan Ceibal, named after the country's national flowering tree, was likewise introduced in 2007 as a presidential initiative to incorporate technology in education and help close a gaping digital divide in the country. The initiative's main objectives were to promote digital inclusion, graduate employability, a national digital culture, higher-order thinking skills, gender equity, and student motivation (Jara, Hepp, & Rodriguez, 2018).

Last, in 2018, the European Commission issued the Digital Education Action Plan that enumerated key digital skills for European citizens and students, including CS and computational thinking (European Commission, 2018). The plan encouraged young Europeans to understand the algorithms that underpin the technologies they use on a regular basis. In response to the plan, Italy’s 2018 National Indications and New Scenarios report included a discussion on the importance of computational thinking and the potential role of educational gaming and robotics in enhancing learning outcomes (Giacalone, 2019). Then, in 2019, the Italian Ministry of Education and the Parliament approved a legislative motion to include CS and computational thinking in primary school curricula by 2022 (Orizzontescuola, 2019).

In some cases, the impetus to expand CS education came more directly from demands from key stakeholders, including industry and parents. For example, British Columbia’s CS education program traces back to calls from a growing technology industry (Doucette, 2016). In 2016, the province’s technology sector employed 86,000 people—more than the mining, forestry, and oil and gas sectors combined, with high growth projections (Silcoff, 2016). The same year, leaders of the province’s technology companies revealed in interviews that access to talent had become their biggest concern (KPMG, 2016). According to a 2016 B.C. Technology Association report, the province needed 12,500 more graduates in CS from tertiary institutions between 2015 and 2021 to fill unmet demand in the labor market (Orton, 2018). The economic justification for improving CS education in the province was clear.

Growing parental demand helped create the impetus for changes to the CS curriculum in Poland. According to Kozlowski (2016), Polish parents perceive CS professions as some of the most desirable options for their children. And given the lack of options for CS education in schools, parents often seek out extracurricular workshops for their children to encourage them to develop their CS skills (Panskyi, Rowinska, & Biedron, 2019). The lack of in-school CS options for students created the push for curricular reforms to expand CS in primary and secondary schools. As former Minister of Education Anna Zalewska declared, Polish students "cannot afford to waste time on [the] slow, arduous task of building digital skills outside school [and] only school education can offer systematic teaching of digital skills" (Szymański, 2016).

2. ICT in schools provides the foundation to expand CS education

Previous efforts to expand access to devices, connectivity, or basic computer literacy in schools provided a starting point in several jurisdictions to expand CS education. For example, the Uruguayan government built its CS education program after implementing expansive one-to-one computing projects, which made CS education affordable and accessible. In England, an ICT course was implemented in schools in the mid-1990s. These dedicated hours during the school day for ICT facilitated the expansion of CS education in the country.

The Chilean Enlaces program, developed in 1992 as a network of 24 universities, technology companies, and other organizations (Jara, Hepp, & Rodriguez, 2018; Sánchez & Salinas, 2008) sought to equip schools with digital tools and train teachers in their use (Severin, 2016). It provided internet connectivity and digital devices that enabled ICT education to take place in virtually all of Chile’s 10,000 public and subsidized private schools by 2008 (Santiago, Fiszbein, Jaramillo, & Radinger, 2017; Severin et al., 2016). Though Enlaces yielded few observable effects on classroom learning or ICT competencies (Sánchez & Salinas, 2008), the program provided the infrastructure needed to begin CS education initiatives years later.

While a history of ICT expansion can serve as a base for CS education, institutional flexibility to transform traditional ICT projects into CS education is crucial. The Chilean Enlaces program’s broader institutional reach resulted in a larger bureaucracy, slower implementation of new programs, and greater dependence on high-level political agendas (Severin, 2016). As a result, the program’s inflexibility prevented it from taking on new projects, placing the onus on the Ministry of Education to take the lead in initiating CS education. In Uruguay, Plan Ceibal’s initial top-down organizational structure enabled relatively fast implementation of the One Laptop per Child program, but closer coordination with educators and education authorities may have helped to better integrate education technology into teaching and learning. More recently, Plan Ceibal has involved teachers and school leaders more closely when introducing CS activities. In England, the transition from ICT courses to a computing curriculum that prioritized CS concepts, instead of computer literacy topics that the ICT teachers typically emphasized before the change, encountered some resistance. Many former ICT teachers were not prepared to implement the new program of study as intended, which leads us to the next key lesson.

3. Developing qualified teachers for CS education should be a top priority

The case studies highlight the critical need to invest in training adequate numbers of teachers to bring CS education to scale. For example, England took a modest approach to teacher training during the first five years of expanding its K-12 CS education program and discovered that its strategy fell short of its original ambitions. In 2013, the English Department for Education (DfE) funded the BCS to establish and run the Network of Excellence to create learning hubs and train a pool of "master" CS teachers. While over 500 master teachers were trained, the numbers were insufficient to expand CS education at scale. Then, in 2018, the DfE substantially increased its funding to establish the National Centre for Computing Education (NCCE) and added 23 new computing hubs throughout England. Hubs offer support to primary and secondary computing teachers in their designated areas, including teaching, resources, and PD (Snowdon, 2019). In just over two years, England has come a long way toward fulfilling its goal of training teachers at scale, with over 29,500 teachers engaged in some type of training (Teach Computing, 2020).

Several education systems partnered with higher education institutions to integrate CS education in both preservice and in-service teacher education programs. For example, two main institutions in British Columbia, Canada—the University of British Columbia and the University of Northern British Columbia—now offer CS courses in their preservice teacher education programs. Similarly, in Poland, the Ministry of National Education sponsored teacher training courses in university CS departments. In Arkansas, state universities offer CS certification as part of preservice teacher training while partnering with the Arkansas Department of Education to host in-service professional development.

Still other systems partnered with nonprofit organizations to deliver teacher education programs. For instance, New Brunswick, Canada, partnered with the nonprofit organization Brilliant Labs to implement teacher PD programs in CS (Brilliant Labs, n.d.). In Chile, the Ministry of Education partnered with several nongovernmental organizations, including Code.org and Fundación Telefónica, to expand teacher training in CS education. Microsoft Philanthropies launched the Technology Education and Literacy in Schools (TEALS) program in the United States and Canada to connect high school teachers with technology industry volunteers. These volunteer experts help instructors learn CS independently over time and develop sustainable high school CS programs (Microsoft, n.d.).

To encourage teachers to participate in these training programs, several systems introduced teacher certification pathways in CS education. For example, in British Columbia, teachers need at least 24 credits of postsecondary coursework in CS education to be qualified to work in public schools. The Arkansas Department of Education incentivizes in-service teachers to attain certification through teaching CS courses and participating in approved PD programs (Code.org, CSTA, ECEP, 2019). In South Korea, where the teaching profession is highly selective and enjoys high social status, teachers receive comprehensive training on high-skill computational thinking elements, such as computer architecture, operating systems, programming, algorithms, networking, and multimedia. Only after receiving the “informatics–computer” teacher’s license may a teacher apply for the informatics teacher recruitment exam (Choi et al., 2015).

When education systems face shortages of qualified teachers, remote instruction can provide greater access to those who are qualified. For example, a dearth of qualified CS teachers has been and continues to be a challenge for Uruguay. To address this challenge, in 2017, Plan Ceibal began providing remote instruction in computational thinking lessons for public school fifth and sixth graders and added fourth-grade students a year later. Students work on thematic projects anchored in a curricular context where instructors integrate tools like Scratch. 4 During the school year, one group of students in a class can work on three to four projects during a weekly 45-minute videoconference with a remote instructor, while another group works on projects for the same duration led by the classroom teacher. In a typical week, the remote instructor introduces an aspect of computational thinking. The in-class teacher then facilitates activities like block-based programming, circuit board examination, or other exercises prescribed by the remote teacher (Cobo & Montaldo, 2018). 5 Importantly, Plan Ceibal implements Pensamiento Computacional by providing a remote instructor and videoconferencing devices at the request of schools, rather than imposing the curriculum on all classrooms (García, 2020). With the ongoing COVID-19 pandemic forcing many school systems across the globe to adopt remote instruction, at least temporarily, we speculate that remote learning is now well poised to become more common in expanding CS education in places facing ongoing teacher shortages.

4. Exposing students to CS education early helps foster demand, especially among underserved populations

Most education systems have underserved populations who lack the opportunity to develop an interest in CS, limiting opportunities later in life. For example, low CS enrollment rates for women at Italian universities reflect the gender gap in CS education. As of 2017, 21.6 percent and 12.3 percent of students completing bachelor’s degrees in information engineering and CS, respectively, were women (Marzolla, 2019). Further, female professors and researchers in these two subjects are also underrepresented. In 2018, only 15 percent and 24 percent of professors and researchers in CS and computer engineering, respectively, were women (Marzolla, 2019). Similar representation gaps at the highest levels of CS training are common globally. Thus, continuing to offer exposure to CS only in post-secondary education will likely perpetuate similar representation gaps.

To address this challenge, several education systems have implemented programs to make CS education accessible to girls and other underserved populations in early grades, before secondary school. For instance, to make CS education more gender balanced, the Italian Ministry of Education partnered with civil society organizations to implement programs to spur girls’ interest in CS and encourage them to specialize in the subject later (European Commission, 2009). An Italian employment agency (ironically named Men at Work) launched a project called Girls Code It Better to extend CS learning opportunities to 1,413 middle school girls across 53 schools in 2019 (Girls Code It Better, n.d.). During the academic year, the girls attended extracurricular CS courses before developing their own technologically advanced products and showcasing their work at an event at Bocconi University in Milan (Brogi, 2019). In addition to introducing the participants to CS, the initiative provided the girls with role models and generated awareness on the gender gap in CS education in Italy.

In British Columbia, students are exposed to computational thinking concepts as early as primary school, where they learn how to prototype, share, and test ideas. In the early grades of primary education, the British Columbia curriculum emphasizes numeracy supported by technology alongside basic information technology skills. Students develop numeracy skills by using models and learn information technology skills they can apply across subjects. In kindergarten and first grade, curricular objectives include preparing students to present ideas using electronic documents. In grades 2 to 3, the curricular goals specify that students should "demonstrate an awareness of ways in which people communicate, including the use of technology," in English language arts classes, as well as find information using information technology tools. By the time students are in grades 4 and 5, the curriculum expects students to focus more on prototyping and testing new ideas to solve a problem (Gannon & Buteau, 2018).

Several systems have also increased participation in CS education by integrating it as a cross-curricular subject. This approach avoids the need to find time during an already-packed school day to teach CS as a standalone subject. For example, in 2015, the Arkansas legislature began requiring elementary and middle school teachers to embed computational thinking concepts in other academic courses. As a result, teachers in the state integrate five main concepts of computational thinking into their lesson plans, including (1) problem-solving, (2) data and information, (3) algorithms and programs, (4) computers and communications, and, importantly, (5) community, global, and ethical impacts (Watson-Fisher, 2019). In the years following this reform, the share of African American students taking CS in high school reached 19.6 percent, a figure that slightly exceeds the percentage of African Americans among all students—a resounding sign of progress in creating student demand for CS education (Computer science on the rise in Arkansas schools, Gov. drafts legislation to make it a requirement for graduation, 2020).

After-school programs and summer camps, jointly organized with external partners, have also helped promote demand for CS education through targeted outreach to commonly underserved populations. For example, Microsoft Thailand has been holding free coding classes, Hour of Code, in partnership with nonprofit organizations, to encourage children from underprivileged backgrounds to pursue STEM education (Microsoft celebrates Hour of Code to build future ready generations in Asia, 2017). In the past decade, Microsoft has extended opportunities for ICT and digital skills development to more than 800,000 youth from diverse backgrounds—including those with disabilities and residents of remote communities (Thongnab, 2019). Its annual #MakeWhatsNext event for young Thai women showcases STEM careers and the growing demand for those careers (Making coding fun for Thailand's young, 2018). Also in Thailand, the Redemptorist Foundation for People with Disabilities, with over 30 years of experience working with differently abled communities in the country, has expanded its services to offer computer training and information technology vocational certificate programs for differently abled youth (Mahatai, n.d.).

In British Columbia, Canada, the Ministry of Education and other stakeholders have taken steps to give girls, women, and Indigenous students the opportunity to develop an interest in CS education. For example, after-school programs have taken specific steps to increase girls' participation in CS education. The UBC Department of Computer Science runs GIRLsmarts4tech, a program that focuses on giving 7th-grade girls role models and mentors who encourage them to pursue technology-related interests (GIRLsmarts4tech, n.d.). According to the latest census, in 2016, British Columbia's First Nations and Indigenous Peoples (FNIP) population—including First Nations, Métis, and Inuit—was 270,585, an increase of 38 percent from 2006. With 42.5 percent of the FNIP population under 25, it is critical for the province to deliver quality education to this young and growing group (Ministry of Advanced Education, Skills and Training, 2018). To this end, part of the British Columbia curriculum for CS education incorporates FNIP world views, perspectives, knowledge, and practices in CS concepts. In addition, the B.C.-based ANCESTOR project (AborigiNal Computer Education through STORytelling) has organized courses and workshops to encourage FNIP students to develop computer games or animated stories related to their culture and land (Westor & Binn, 2015).

As these examples suggest, private sector and nongovernmental organizations can play an important role in the expansion of CS education, an issue we turn to now.

5. Engaging key stakeholders can help address bottlenecks

In most reviewed cases, the private sector and nongovernmental organizations played a role in promoting the expansion of CS education. Technology companies not only helped to lobby for expanding CS education, but often provided much-needed infrastructure and subject matter expertise in the design and rollout of CS education. For example, Microsoft Thailand has worked with the Thai government since 1998 in various capacities, including contributing to the development and implementation of coding projects, digital skills initiatives, teacher training programs, and online learning platforms (Thongnab, 2019; Coding Thailand, n.d.). Since 2002, Intel's Teach Thailand program has trained more than 150,000 teachers. Additionally, Google Coding Teacher workshops train educators on teaching computational thinking through CS Unplugged coding activities (EduTech Thailand, 2019). The workshops are conducted by Edutech (Thailand) Co., Ltd., an educational partner of Google, which adapted the Google curriculum to the Thai education context. Samsung has also been engaged in a smart classroom project that has built futuristic classroom prototypes and provided training in 21st century competencies (OECD/UNESCO, 2016).

In England, nongovernmental organizations have played an important role in supporting the government's expansion of CS education. The DfE has relied on outside organizations for help in executing its CS education responsibilities. The DfE's NCCE, for instance, is delivered by a consortium that includes the British Computer Society, STEM Learning, and the Raspberry Pi Foundation, three nonprofit organizations dedicated to advancing the computing industry and CS education in the country (British Computer Society, n.d.; STEM Learning, n.d.; Raspberry Pi Foundation, n.d.).

Chile’s Ministry of Education developed partnerships with individual NGOs and private companies to engage more students, especially girls. These initiatives offer the opportunity for hands-on learning projects and programming activities that students can perform from their home computers. Some of the same partners also provide online training platforms for teacher PD.

Industry advocacy organizations can also play an important role in the expansion of CS education. For example, in Arkansas, the state's business community has long supported CS education (Nix, 2017). Accelerate Arkansas was established in 2005 as an organization of 70 private and public sector members dedicated to moving Arkansas toward a more innovation- and knowledge-based economy (State of Arkansas, 2018). Similarly, in England, Computing at School, a coalition of industry representatives and teachers, played a pivotal role in the 2014 rebranding of the ICT education program as a computing program that placed a greater emphasis on CS (Royal Society, 2017).

To ensure sustainability, one key lesson is that governments should coordinate across multiple stakeholders. In Chile, heavy reliance on NGO-provided training and resources, without strong government coordination, has been insufficient to motivate more schools and teachers to include CS and computational thinking in classroom learning activities. By contrast, the DfE has effectively coordinated across various nongovernmental organizations to expand CS education. Similarly, Arkansas's Department of Education is leading an effort to get half of all school districts to form partnerships with universities and business organizations so that students can participate in internships and college-level CS courses while in high school (Talk Business & Politics, 2020). In sum, the experience of decades of educational policies across the education systems reviewed shows that schools require long-lasting, coordinated, and multidimensional support to achieve successful implementation of CS in classrooms.

6. When taught in an interactive, hands-on way, CS education builds skills for life

Several of the cases studied introduced innovative pedagogies using makerspaces (learning spaces with customizable layouts and materials) and project-based learning to develop not only skills specific to CS but also skills that are relevant more broadly for life. For example, Uruguayan CS education features innovative concepts like robotics competitions and makerspaces that allow students to creatively apply their computational thinking lessons and that can spark interest and deepen understanding. In addition, computational thinking has been integrated across subject areas (e.g., in biology, math, and statistics) (Vázquez et al., 2019) and in interdisciplinary projects that immerse students in imaginative challenges that foster creative, challenging, and active learning (Cobo & Montaldo, 2018). For instance, students can use sensors and program circuit boards to measure their own progress in physical education (e.g., measuring how many laps they can run in a given period).
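To make this kind of project concrete, the minimal Python sketch below illustrates the lap-counting exercise described above. It is not taken from Plan Ceibal's materials; the function name and the simulated timestamps are invented for illustration, and in a real classroom project the lap events would come from a button or motion sensor on a circuit board rather than a hard-coded list.

```python
# Hypothetical sketch of a physical-education lap counter.
# In a classroom project the timestamps would come from a sensor on a
# circuit board; here they are simulated so the example runs anywhere.

def count_laps(lap_times_sec, period_sec=300):
    """Count laps whose timestamps fall within the first `period_sec` seconds."""
    return sum(1 for t in lap_times_sec if t <= period_sec)

if __name__ == "__main__":
    # Simulated lap timestamps, in seconds since the start of the exercise.
    recorded_laps = [42, 95, 151, 210, 268, 330, 395]
    print("Laps completed in 5 minutes:", count_laps(recorded_laps))  # -> 5
```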

Similarly, in New Brunswick, Brilliant Labs provides learning materials to schools so they can offer CS lessons using makerspaces that encourage students to develop projects, engage with technology, learn, and collaborate. These makerspaces enable students to creatively apply their CS and computational thinking lessons, sparking interest and deepening understanding.

Thailand’s curricular reforms also integrated project-based learning into CS education. Thai students in grades 4-6 learn computing skills for daily life, such as using logic in problem-solving, searching for data and assessing its correctness, and block-based coding (e.g., Scratch). Students in grades 7-9 then focus on working with primary data, with objectives that include using programming to solve problems; collecting, analyzing, presenting, and assessing data and information; and text-based programming in languages such as Python. Finally, students in grades 10-12 focus on applying advanced computing technology and programming to solve real-world problems, using knowledge from other subjects and data from external sources (Piamsa-nga et al., 2020).
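As a concrete illustration of the "collect, analyze, present" objectives for grades 7-9, a short text-based exercise in Python might look like the sketch below. The dataset and the variable names are invented for this example and are not drawn from the Thai curriculum documents.

```python
# Hypothetical grade 7-9 style exercise: analyze a small dataset collected by
# the class and present the results as text.

daily_temperatures_c = {
    "Mon": 31.5, "Tue": 32.0, "Wed": 30.8, "Thu": 33.1, "Fri": 31.9,
}

average = sum(daily_temperatures_c.values()) / len(daily_temperatures_c)
hottest_day = max(daily_temperatures_c, key=daily_temperatures_c.get)

print(f"Average temperature: {average:.1f} C")
print(f"Hottest day: {hottest_day} ({daily_temperatures_c[hottest_day]} C)")
```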

After two years of nationwide discussions from 2014 to 2016, the Polish Ministry of National Education announced the creation of a new core curriculum for CS in primary and secondary schools (Syslo, 2020). The new curriculum's goals included having students use technology to identify solutions to problems in everyday and professional situations, and using CS to support innovation in other disciplines, such as science, the arts, and the social sciences (Panskyi, Rowinska, & Biedron, 2019).

CS skills are increasingly necessary to function in today's technologically advanced world and will only become more so in the future. They enable individuals to understand how technology works and how best to harness its potential to improve lives. As these skills grow in importance in the rapidly changing 21st century, CS education promises to significantly enhance student preparedness for the future of work and active citizenship.

Our findings suggest six recommendations for governments interested in taking CS education to scale in primary and secondary schools. First, use economic development strategies focused on expanding technology-based jobs to engage all stakeholders and expand CS education in primary and secondary schools; such a strategy helps attract and retain investors and fosters demand for CS education among students. Second, provide access to ICT infrastructure in primary and secondary schools to facilitate the introduction and expansion of CS education. Third, make developing qualified CS teachers a top priority; the evidence is clear that a qualified teacher is the most important factor in student learning, so preparing the teacher force needed for CS at scale is crucial. Fourth, expose students to CS education early to increase their likelihood of pursuing it, which is especially important for girls and other groups historically underrepresented in STEM and CS fields. Fifth, engage key stakeholders (including educators, the private sector, and civil society) to help address bottlenecks in physical and technical capacity. Finally, teach CS in an interactive, hands-on way to build skills for life.

Through studying the cases of regional and national governments at various levels of economic development and progress in implementing CS education programs, governments from around the globe can learn how to expand and improve CS education and help students develop a new basic skill necessary for the future of work and active citizenship.

Case studies

For a detailed discussion of regional and national education systems from diverse regions and circumstances that have implemented computer science education programs, download the case studies.

  • Arkansas
  • British Columbia
  • Chile
  • England
  • Italy
  • New Brunswick
  • South Korea
  • South Africa
  • Uruguay

About the Authors

Emiliana Vegas, Co-director – Center for Universal Education; Michael Hansen, Senior Fellow – Brown Center on Education Policy; Brian Fowler, Former Research Analyst – Center for Universal Education.

  • 1. Denning et al. (1989) defined the discipline of computing as “the systematic study of algorithmic processes that describe and transform information: their theory, analysis, design, efficiency, implementation, and application.”
  • 2. Integrated development environments include programs like Scratch (Resnick et al., 2009), Code.org (Kelelioglu, 2015), and CHERP3 Creative Hybrid Environment for Robotics Programming (Bers et al., 2014).
  • 3. The authors of these studies conclude that self-teaching methods and laboratory control methods may be effective for teaching programming skills.
  • 4. In 2019, President Tabaré Vázquez stated that “All children in kindergartens and schools are programming in Scratch, or designing strategies based on problem-solving” (Uruguay Presidency, 2019).
  • 5. Remote instruction via videoconferencing technology improved learning in mathematics in an experiment in Ghana (Johnston & Ksoll, 2017). It is very plausible that Uruguay’s approach to giving computational thinking instruction via videoconference could also be effective.

Acknowledgments

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Brookings gratefully acknowledges the support provided by Amazon, Atlassian Foundation International, Google, and Microsoft.

Brookings recognizes that the value it provides is in its commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.


Your Article Library

Computers: Essay on the Importance of the Computer in Modern Society


Read this comprehensive essay on the importance of the computer in modern society!

As the world progresses in its never-ending chase for time and wealth, it is undeniable that science has made astounding developments.


As the 21st century unfolds, it is clear that it brings advancements humanity may never have dreamed of, and one of these shining developments is the well-recognized computer. Deriving its name from the Latin for ‘computing’ or ‘reckoning’, the computer is an invention that was famously named ‘Machine of the Year’ by an international magazine.

The computer is not a simple machine. It is like a very modern and highly complex calculator: it performs its functions at great speed and also helps us search for information and make progress in our homes and businesses. A computer can therefore be called a calculator with a twist, for not only does it perform fast calculations, it also has other special capabilities. The computer has thoroughly changed the way we see and do things, with special tools such as auto-correction that work across languages, logic, and subjects.

There was a time when computers were heard of only as a luxury. Today, however, they are an unavoidable part of success and development. No longer are they owned only by the very wealthy; in fact, computers are, and increasingly will be, used to accomplish ambitious goals of success and development. For example, in India, the widespread knowledge and use of computers could bring change in a big and astonishing way, helping to reduce illiteracy and leading to greater optimism, efficiency, productivity, and quality.

Even now, in our day-to-day lives, computers play an integral role. They are used not only at the office or at home, but in all kinds of sectors and businesses: at airports, restaurants, railway stations, banks, and so on. Slowly and gradually, as computers penetrate modern society, people are becoming more and more optimistic about the promises their invention holds. They are also used in government, business, and industry, and by witnessing the rapid progress of the computer, humanity slowly sees the benefits it has brought along.

One of the best things about the computer is that it can save a great deal of manual effort, cost, and time. With a computer, tasks can be done automatically, saving countless hours that might otherwise be spent doing the job manually.

Computers also ensure greater accuracy, for example in ticket booking, bill payment, insurance, and shopping. The automatic operation of vehicles such as trains also helps to ensure the safety and reliability of journeys. Computers can likewise be used to observe and predict traffic patterns, a benefit to everyone that saves the hassle of being stuck for hours in roadblocks and traffic jams.

Computers can also drastically change the way agricultural tasks and businesses are carried out all over the world. In agriculture, computers are being used to identify the best combinations of soil and plants for producing good crops. The use of computers in this sector, along with better agricultural practices and products, could help the agricultural industry in countries like India reach new heights, directly supporting the welfare of the economy.

It is also wonderful to see that the invention of this remarkable machine has brought a ray of hope to the sick. Computers are capable of supporting a medical revolution: in the health sector, they are being used for research on blood groups, medical histories, and more, helping to improve medicine in a big way. The knowledge that computers provide in this field may lead to better use and prescription of medicinal drugs and to better health, while also improving diagnosis and making health care faster and more efficient.

Although computers are driving the evolution of technology and changing the way lives are lived, it cannot be denied that there are areas where their impact is not yet fully realized. In the education sector, for instance, literacy rates have not improved through computers the way other sectors seem to have improved almost overnight.

The fact remains that 64 percent of our population remains illiterate to this day, and it would be a revolutionary act if computers were put to full use to spread educational awareness in all areas, especially among underprivileged groups. They can be used to plan lessons, and lessons can be taught on computers too; the benefit of this prospect lies in the fact that computers excel at many different things at once, which means they can be used to teach not just a limited set of subjects but to spread education of all kinds, including text, numbers, and graphics.

Some may entertain the dreadful thought that computers will take the teacher's place in the classroom, but we should look at the brighter side of this prospect. The teacher will no longer be a person who merely feeds data into a pupil's mind; he or she can once again become the authority who imparts both philosophical and spiritual education to students, rising in esteem and in the scope of his or her role.

The advantage of computers can also be seen in the fact that they may improve administration throughout the world. By providing accurate, up-to-date information to administrative departments, computers may change the way decisions are taken across the globe. Keeping all of the above in mind, we must accept that, if used the right way, computers are a gift of science to mankind.


September 1, 1991

The Computer for the 21st Century

Specialized elements of hardware and software, connected by wires, radio waves and infrared, will be so ubiquitous that no one will notice their presence

By Mark Weiser


How Is Technology Changing the World, and How Should the World Change Technology?

Josephine Wolff, "How Is Technology Changing the World, and How Should the World Change Technology?" Global Perspectives, 1 February 2021; 2(1): 27353. doi: https://doi.org/10.1525/gp.2021.27353


Technologies are becoming increasingly complicated and increasingly interconnected. Cars, airplanes, medical devices, financial transactions, and electricity systems all rely on more computer software than they ever have before, making them seem both harder to understand and, in some cases, harder to control. Government and corporate surveillance of individuals and information processing relies largely on digital technologies and artificial intelligence, and therefore involves less human-to-human contact than ever before and more opportunities for biases to be embedded and codified in our technological systems in ways we may not even be able to identify or recognize. Bioengineering advances are opening up new terrain for challenging philosophical, political, and economic questions regarding human-natural relations. Additionally, the management of these large and small devices and systems is increasingly done through the cloud, so that control over them is both very remote and removed from direct human or social control. The study of how to make technologies like artificial intelligence or the Internet of Things “explainable” has become its own area of research because it is so difficult to understand how they work or what is at fault when something goes wrong (Gunning and Aha 2019) .

This growing complexity makes it more difficult than ever—and more imperative than ever—for scholars to probe how technological advancements are altering life around the world in both positive and negative ways and what social, political, and legal tools are needed to help shape the development and design of technology in beneficial directions. This can seem like an impossible task in light of the rapid pace of technological change and the sense that its continued advancement is inevitable, but many countries around the world are only just beginning to take significant steps toward regulating computer technologies and are still in the process of radically rethinking the rules governing global data flows and exchange of technology across borders.

These are exciting times not just for technological development but also for technology policy—our technologies may be more advanced and complicated than ever but so, too, are our understandings of how they can best be leveraged, protected, and even constrained. The structures of technological systems are determined largely by government and institutional policies, and those structures have tremendous implications for social organization and agency, ranging from open source, open systems that are highly distributed and decentralized to those that are tightly controlled and closed, structured according to stricter and more hierarchical models. And just as our understanding of the governance of technology is developing in new and interesting ways, so, too, is our understanding of the social, cultural, environmental, and political dimensions of emerging technologies. We are realizing both the challenges and the importance of mapping out the full range of ways that technology is changing our society, what we want those changes to look like, and what tools we have to try to influence and guide those shifts.

Technology can be a source of tremendous optimism. It can help overcome some of the greatest challenges our society faces, including climate change, famine, and disease. For those who believe in the power of innovation and the promise of creative destruction to advance economic development and lead to better quality of life, technology is a vital economic driver (Schumpeter 1942) . But it can also be a tool of tremendous fear and oppression, embedding biases in automated decision-making processes and information-processing algorithms, exacerbating economic and social inequalities within and between countries to a staggering degree, or creating new weapons and avenues for attack unlike any we have had to face in the past. Scholars have even contended that the emergence of the term technology in the nineteenth and twentieth centuries marked a shift from viewing individual pieces of machinery as a means to achieving political and social progress to the more dangerous, or hazardous, view that larger-scale, more complex technological systems were a semiautonomous form of progress in and of themselves (Marx 2010) . More recently, technologists have sharply criticized what they view as a wave of new Luddites, people intent on slowing the development of technology and turning back the clock on innovation as a means of mitigating the societal impacts of technological change (Marlowe 1970) .

At the heart of fights over new technologies and their resulting global changes are often two conflicting visions of technology: a fundamentally optimistic one that believes humans use it as a tool to achieve greater goals, and a fundamentally pessimistic one that holds that technological systems have reached a point beyond our control. Technology philosophers have argued that neither of these views is wholly accurate and that a purely optimistic or pessimistic view of technology is insufficient to capture the nuances and complexity of our relationship to technology (Oberdiek and Tiles 1995) . Understanding technology and how we can make better decisions about designing, deploying, and refining it requires capturing that nuance and complexity through in-depth analysis of the impacts of different technological advancements and the ways they have played out in all their complicated and controversial messiness across the world.

These impacts are often unpredictable as technologies are adopted in new contexts and come to be used in ways that sometimes diverge significantly from the use cases envisioned by their designers. The internet, designed to help transmit information between computer networks, became a crucial vehicle for commerce, introducing unexpected avenues for crime and financial fraud. Social media platforms like Facebook and Twitter, designed to connect friends and families through sharing photographs and life updates, became focal points of election controversies and political influence. Cryptocurrencies, originally intended as a means of decentralized digital cash, have become a significant environmental hazard as more and more computing resources are devoted to mining these forms of virtual money. One of the crucial challenges in this area is therefore recognizing, documenting, and even anticipating some of these unexpected consequences and providing mechanisms to technologists for how to think through the impacts of their work, as well as possible other paths to different outcomes (Verbeek 2006) . And just as technological innovations can cause unexpected harm, they can also bring about extraordinary benefits—new vaccines and medicines to address global pandemics and save thousands of lives, new sources of energy that can drastically reduce emissions and help combat climate change, new modes of education that can reach people who would otherwise have no access to schooling. Regulating technology therefore requires a careful balance of mitigating risks without overly restricting potentially beneficial innovations.

Nations around the world have taken very different approaches to governing emerging technologies and have adopted a range of different technologies themselves in pursuit of more modern governance structures and processes (Braman 2009) . In Europe, the precautionary principle has guided much more anticipatory regulation aimed at addressing the risks presented by technologies even before they are fully realized. For instance, the European Union’s General Data Protection Regulation focuses on the responsibilities of data controllers and processors to provide individuals with access to their data and information about how that data is being used not just as a means of addressing existing security and privacy threats, such as data breaches, but also to protect against future developments and uses of that data for artificial intelligence and automated decision-making purposes. In Germany, Technische Überwachungsvereine, or TÜVs, perform regular tests and inspections of technological systems to assess and minimize risks over time, as the tech landscape evolves. In the United States, by contrast, there is much greater reliance on litigation and liability regimes to address safety and security failings after-the-fact. These different approaches reflect not just the different legal and regulatory mechanisms and philosophies of different nations but also the different ways those nations prioritize rapid development of the technology industry versus safety, security, and individual control. Typically, governance innovations move much more slowly than technological innovations, and regulations can lag years, or even decades, behind the technologies they aim to govern.

In addition to this varied set of national regulatory approaches, a variety of international and nongovernmental organizations also contribute to the process of developing standards, rules, and norms for new technologies, including the International Organization for Standardization and the International Telecommunication Union. These multilateral and NGO actors play an especially important role in trying to define appropriate boundaries for the use of new technologies by governments as instruments of control for the state.

At the same time that policymakers are under scrutiny both for their decisions about how to regulate technology as well as their decisions about how and when to adopt technologies like facial recognition themselves, technology firms and designers have also come under increasing criticism. Growing recognition that the design of technologies can have far-reaching social and political implications means that there is more pressure on technologists to take into consideration the consequences of their decisions early on in the design process (Vincenti 1993; Winner 1980) . The question of how technologists should incorporate these social dimensions into their design and development processes is an old one, and debate on these issues dates back to the 1970s, but it remains an urgent and often overlooked part of the puzzle because so many of the supposedly systematic mechanisms for assessing the impacts of new technologies in both the private and public sectors are primarily bureaucratic, symbolic processes rather than carrying any real weight or influence.

Technologists are often ill-equipped or unwilling to respond to the sorts of social problems that their creations have—often unwittingly—exacerbated, and instead point to governments and lawmakers to address those problems (Zuckerberg 2019) . But governments often have few incentives to engage in this area. This is because setting clear standards and rules for an ever-evolving technological landscape can be extremely challenging, because enforcement of those rules can be a significant undertaking requiring considerable expertise, and because the tech sector is a major source of jobs and revenue for many countries that may fear losing those benefits if they constrain companies too much. This indicates not just a need for clearer incentives and better policies for both private- and public-sector entities but also a need for new mechanisms whereby the technology development and design process can be influenced and assessed by people with a wider range of experiences and expertise. If we want technologies to be designed with an eye to their impacts, who is responsible for predicting, measuring, and mitigating those impacts throughout the design process? Involving policymakers in that process in a more meaningful way will also require training them to have the analytic and technical capacity to more fully engage with technologists and understand more fully the implications of their decisions.

At the same time that tech companies seem unwilling or unable to rein in their creations, many also fear they wield too much power, in some cases all but replacing governments and international organizations in their ability to make decisions that affect millions of people worldwide and control access to information, platforms, and audiences (Kilovaty 2020) . Regulators around the world have begun considering whether some of these companies have become so powerful that they violate the tenets of antitrust laws, but it can be difficult for governments to identify exactly what those violations are, especially in the context of an industry where the largest players often provide their customers with free services. And the platforms and services developed by tech companies are often wielded most powerfully and dangerously not directly by their private-sector creators and operators but instead by states themselves for widespread misinformation campaigns that serve political purposes (Nye 2018) .

Since the largest private entities in the tech sector operate in many countries, they are often better poised to implement global changes to the technological ecosystem than individual states or regulatory bodies, creating new challenges to existing governance structures and hierarchies. Just as it can be challenging to provide oversight for government use of technologies, so, too, oversight of the biggest tech companies, which have more resources, reach, and power than many nations, can prove to be a daunting task. The rise of network forms of organization and the growing gig economy have added to these challenges, making it even harder for regulators to fully address the breadth of these companies’ operations (Powell 1990) . The private-public partnerships that have emerged around energy, transportation, medical, and cyber technologies further complicate this picture, blurring the line between the public and private sectors and raising critical questions about the role of each in providing critical infrastructure, health care, and security. How can and should private tech companies operating in these different sectors be governed, and what types of influence do they exert over regulators? How feasible are different policy proposals aimed at technological innovation, and what potential unintended consequences might they have?

Conflict between countries has also spilled over significantly into the private sector in recent years, most notably in the case of tensions between the United States and China over which technologies developed in each country will be permitted by the other and which will be purchased by other customers, outside those two countries. Countries competing to develop the best technology is not a new phenomenon, but the current conflicts have major international ramifications and will influence the infrastructure that is installed and used around the world for years to come. Untangling the different factors that feed into these tussles as well as whom they benefit and whom they leave at a disadvantage is crucial for understanding how governments can most effectively foster technological innovation and invention domestically as well as the global consequences of those efforts. As much of the world is forced to choose between buying technology from the United States or from China, how should we understand the long-term impacts of those choices and the options available to people in countries without robust domestic tech industries? Does the global spread of technologies help fuel further innovation in countries with smaller tech markets, or does it reinforce the dominance of the states that are already most prominent in this sector? How can research universities maintain global collaborations and research communities in light of these national competitions, and what role does government research and development spending play in fostering innovation within its own borders and worldwide? How should intellectual property protections evolve to meet the demands of the technology industry, and how can those protections be enforced globally?

These conflicts between countries sometimes appear to challenge the feasibility of truly global technologies and networks that operate across all countries through standardized protocols and design features. Organizations like the International Organization for Standardization, the World Intellectual Property Organization, the United Nations Industrial Development Organization, and many others have tried to harmonize these policies and protocols across different countries for years, but have met with limited success when it comes to resolving the issues of greatest tension and disagreement among nations. For technology to operate in a global environment, there is a need for a much greater degree of coordination among countries and the development of common standards and norms, but governments continue to struggle to agree not just on those norms themselves but even the appropriate venue and processes for developing them. Without greater global cooperation, is it possible to maintain a global network like the internet or to promote the spread of new technologies around the world to address challenges of sustainability? What might help incentivize that cooperation moving forward, and what could new structures and process for governance of global technologies look like? Why has the tech industry’s self-regulation culture persisted? Do the same traditional drivers for public policy, such as politics of harmonization and path dependency in policy-making, still sufficiently explain policy outcomes in this space? As new technologies and their applications spread across the globe in uneven ways, how and when do they create forces of change from unexpected places?

These are some of the questions that we hope to address in the Technology and Global Change section through articles that tackle new dimensions of the global landscape of designing, developing, deploying, and assessing new technologies to address major challenges the world faces. Understanding these processes requires synthesizing knowledge from a range of different fields, including sociology, political science, economics, and history, as well as technical fields such as engineering, climate science, and computer science. A crucial part of understanding how technology has created global change and, in turn, how global changes have influenced the development of new technologies is understanding the technologies themselves in all their richness and complexity—how they work, the limits of what they can do, what they were designed to do, how they are actually used. Just as technologies themselves are becoming more complicated, so are their embeddings and relationships to the larger social, political, and legal contexts in which they exist. Scholars across all disciplines are encouraged to join us in untangling those complexities.

Josephine Wolff is an associate professor of cybersecurity policy at the Fletcher School of Law and Diplomacy at Tufts University. Her book You’ll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches was published by MIT Press in 2018.



Sensing and Systems in Pervasive Computing, pp. 3–15

Introduction: The Computer for the 21st Century

Dan Chalmers (University of Sussex, Brighton, United Kingdom)

Part of the book series: Undergraduate Topics in Computer Science (UTICS)

In this chapter we introduce the vision of pervasive computing, some “classic” applications and core challenges in computing.

Keywords: Sensor Network, Ubiquitous Computing, Pervasive Computing, Smart Space


http://www.geocaching.com/ .

http://www.openstreetmap.org/ .

http://www.locative-media.org/ .

http://www.geograph.org.uk/ .

http://www.phidgets.com/ .

Abowd, G.D., Mynatt, E.D.: Charting past, present, and future research in ubiquitous computing. ACM Trans. Comput.-Hum. Interact. 7 (1), 29–58 (2000)

Article   Google Scholar  



Information technologies of 21st century and their impact on the society

Mohammad Yamin

Department of MIS, Faculty of Economics and Admin, King Abdulaziz University, Jeddah, Saudi Arabia

The twenty-first century has witnessed the emergence of some groundbreaking information technologies that have revolutionised our way of life. The revolution began late in the 20th century with the arrival of the internet in 1995, which has given rise to methods, tools and gadgets with astonishing applications in all academic disciplines and business sectors. In this article we shall provide a design for a 'spider robot' which may be used for the efficient cleaning of deadly viruses. In addition, we shall examine some of the emerging technologies that are producing remarkable breakthroughs and improvements which were inconceivable earlier. In particular, we shall look at the technologies and tools associated with the Internet of Things (IoT), Blockchain, Artificial Intelligence, Sensor Networks and Social Media, and analyse their capabilities and business value. As we recognise, most technologies, after completing their commercial journey, are utilised by the business world in physical as well as virtual marketing environments. We shall also look at the social impact of some of these technologies and tools.

Introduction

The internet, which started in 1989 [1], now holds some 1.2 million terabytes of data at Google, Amazon, Microsoft and Facebook alone [2]. It is estimated that the surface web contains over four and a half billion websites, while the deep web, about which we know very little, is at least four hundred times bigger [3]. Email platforms emerged soon afterwards, in 1990, followed by many other applications. From 1995 to the early 21st century we then saw a chain of Web 2.0 technologies: e-commerce, social media platforms, e-business, e-learning, e-government, Cloud Computing and more [4]. We now have a large number of internet-based technologies with countless applications in many domains, including business, science and engineering, and healthcare [5]. The impact of these technologies on our personal lives is such that we are compelled to adopt many of them whether we like it or not.

In this article we shall study the nature, usage and capabilities of emerging and future technologies. These include Big Data Analytics, the Internet of Things (IoT), sensor networks (RFID, location-based services), Artificial Intelligence (AI), robotics, Blockchain, mobile digital platforms (digital streets, towns and villages), Cloud (Fog and Dew) computing, social networks and business, and Virtual Reality.

With ever-increasing computing power and declining costs of data storage, many government and private organizations are gathering enormous amounts of data. The data accumulated over years of acquisition and processing in many organizations has become so large that it can no longer be analyzed by traditional tools within a reasonable time. Disciplines that routinely create Big Data include astronomy, atmospheric science, biology, genomics, nuclear physics, biochemical experiments, medical records, and scientific research. Organizations producing enormous data include Google, Facebook, YouTube, hospitals, parliaments, courts, newspapers and magazines, and government departments. Because of its size, the analysis of Big Data is not a straightforward task and often requires advanced methods and techniques. The lack of timely analysis of Big Data in certain domains may have devastating results and pose threats to societies, nature and the ecosystem.

Big medic data

The healthcare field is generating big data which has the potential to surpass other fields in data growth. Big Medic Data usually refers to the considerably large pool of health, hospital and treatment records, administrative medical claims, and data from clinical trials, smartphone applications, wearable devices such as RFID tags and heart-rate monitors, different kinds of social media, and omics research. Omics research (genomics, proteomics, metabolomics, etc.) in particular is leading the charge in the growth of Big Data [6, 7]. The analytics requirements for Big Data in omics datasets such as genomics, transcriptomics, proteomics, metabolomics, metagenomics and phenomics include data cleaning, normalization, biomolecule identification, data dimensionality reduction, biological contextualization, statistical validation, data storage and handling, sharing, and data archiving [6].

According to [8], in 2011 alone the data in the United States healthcare system amounted to one hundred and fifty exabytes (one exabyte = one billion gigabytes, or 10^18 bytes), and it is expected soon to reach 10^21 and later 10^24 bytes. Some scientists have classified medical data into three categories: (a) a large number of samples but a small number of parameters; (b) a small number of samples and a small number of parameters; and (c) a large number of samples and a large number of parameters [9]. Although data in the first category may be analyzed by classical methods, it may be incomplete, noisy and inconsistent, and so requires data cleaning. Data in the third category can be big and may require advanced analytics.
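
As a concrete illustration of two of the preprocessing steps listed above, the following minimal Python sketch drops records with missing readings and then applies z-score normalisation; the patient fields and values are invented for illustration only.

    # Toy sketch: cleaning and z-score normalisation of a small clinical dataset.
    # The column names and values are invented for illustration only.
    import math

    records = [
        {"patient": "A", "heart_rate": 72,   "glucose": 5.4},
        {"patient": "B", "heart_rate": None, "glucose": 6.1},   # missing reading
        {"patient": "C", "heart_rate": 88,   "glucose": 5.9},
    ]

    # Cleaning: drop records with missing measurements.
    clean = [r for r in records if None not in r.values()]

    # Normalisation: rescale each numeric field to zero mean and unit variance.
    def zscore(values):
        mean = sum(values) / len(values)
        std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
        return [(v - mean) / std for v in values]

    for field in ("heart_rate", "glucose"):
        normalised = zscore([r[field] for r in clean])
        for r, z in zip(clean, normalised):
            r[field] = round(z, 3)

    print(clean)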

Big data analytics

Big Data cannot be analyzed in real time by traditional analytical methods. The analysis of Big Data, popularly known as Big Data Analytics, often involves a number of technologies, sophisticated processes and tools, as depicted in Fig. 1. Big Data can provide smart decision-making and business intelligence to businesses and corporations; unless it is analyzed, it is impractical and a burden to the organization. Big Data Analytics involves mining and extracting useful associations (knowledge discovery) for intelligent decision-making and forecasts. The challenges in Big Data Analytics include computational complexity, scalability and visualization of data. Moreover, information security risk increases with the surge in the amount of data, which is the case with Big Data.

[Fig. 1: Big Data Analytics]

The aim of data analytics has always been knowledge discovery to support smart and timely decision-making. With big data, the knowledge base becomes wider and sharper, providing greater business intelligence and helping businesses become market leaders. Conventional processing paradigms and architectures are inefficient for dealing with the large datasets of Big Data; in particular, their size demands parallel processing. Recent technologies such as Spark, Hadoop, MapReduce, R, data lakes and NoSQL have emerged to provide Big Data Analytics. Alongside these data analytics technologies, it is advantageous to invest in designing superior storage systems.
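
To make the parallel-processing point concrete, here is a minimal sketch of a distributed aggregation using PySpark, one of the technologies named above; the file name and column names are hypothetical, and PySpark must be installed for it to run.

    # Minimal PySpark sketch: a parallel group-and-count over a large CSV file.
    # "admissions.csv" and its columns are hypothetical assumptions.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("big-data-sketch").getOrCreate()

    df = spark.read.csv("admissions.csv", header=True, inferSchema=True)

    # The aggregation is distributed across the cluster's worker nodes.
    counts = df.groupBy("diagnosis").count().orderBy("count", ascending=False)
    counts.show(10)

    spark.stop()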

Health data predominantly consists of visual, graphical, audio and video data, and analysing it to gain meaningful insights and diagnoses may depend on the choice of tools. Medical data has traditionally been scattered across organizations, often not organized properly. What we usually find are medical record-keeping systems consisting of heterogeneous data, which require extra effort to reorganize onto a common platform. As discussed before, the health profession produces enormous amounts of data, so analysing it in an efficient and timely manner can potentially save many lives.

Cloud computing

Commercial operation of Clouds from company platforms began in 1999 [10]. Initially, Clouds complemented and empowered outsourcing. In the early stages there were privacy concerns associated with Cloud Computing, as the owners of data had to hand custody of their data to the Cloud owners. However, as time passed and Cloud owners took confidence-building measures, the technology became so prevalent that most of the world's SMEs now use it in one form or another. More information on Cloud Computing can be found in [11, 12].

Fog computing

As faster processing became a need for some critical applications, the Clouds gave rise to Fog, or Edge, computing. As can be seen from the Gartner hype cycles in Figs. 2 and 3, Edge computing, as an emerging technology, peaked in 2017–18. As shown in the Cloud Computing architecture in Fig. 4, the middle, or second, layer of the Cloud configuration is represented by Fog computing. For some applications, the delay in communication between computing devices in the field and data in a Cloud (often physically thousands of miles apart) is detrimental to the time requirements, as it may cause considerable delay in time-sensitive applications. For example, processing and storage for early warning of disasters (stampedes, tsunamis, etc.) must happen in real time. For these kinds of applications, computing and storage resources should be placed closer to where the computing is needed (application areas like the digital street), and Fog computing is considered suitable for such scenarios [13]. Clouds are an integral part of many IoT applications and play a central role in ubiquitous computing systems in health-related cases like the one depicted in Fig. 5. Some applications of Fog computing can be found in [14–16]; further results on Fog computing are available in [17–19].
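
A minimal sketch of the edge-side pattern described above: readings are examined next to their source so that time-critical alerts need no cloud round trip, and only a compact summary is forwarded. The threshold and the send_to_cloud stub are assumptions for illustration.

    # Sketch of the fog/edge idea: process sensor readings next to their source
    # and forward only compact summaries to the distant cloud.
    from statistics import mean

    CROWD_DENSITY_ALERT = 4.0   # people per square metre (assumed threshold)

    def send_to_cloud(summary):
        # Placeholder for a real upload; in practice this would be a network call.
        print("uploading summary:", summary)

    def process_window(readings):
        # Local, low-latency decision: alert immediately, no cloud round trip.
        if max(readings) > CROWD_DENSITY_ALERT:
            print("ALERT: dangerous crowd density detected locally")
        # Only an aggregate leaves the edge node, saving bandwidth and time.
        send_to_cloud({"mean": mean(readings), "max": max(readings), "n": len(readings)})

    process_window([2.1, 2.4, 4.6, 3.0])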

[Fig. 2: Emerging Technologies 2018]

[Fig. 3: Emerging Technologies 2017]

[Fig. 4: Relationship of Cloud, Fog and Dew computing]

[Fig. 5: Snapshot of a Ubiquitous System]

Dew computing

When the Fog is overloaded and unable to cater for peaks of demand from high-demand applications, it offloads some of its data and/or processing to the associated Cloud. In such a situation, the Fog exposes its dependency on a complementary bottom layer of the Cloud architecture, shown in Fig. 4. This bottom layer of the hierarchical organization of resources is known as the Dew layer. Its purpose is to handle tasks by exploiting resources near the end user, with minimal internet access [17, 20]. Dew computing determines when to call on the services of the different layers of the Cloud architecture. It is also important to note that Dew computing [20] belongs to the distributed computing hierarchy and is integrated with Fog computing services, as is also evident in Fig. 4. In summary, the Cloud architecture has three layers: the first is the Cloud, the second the Fog and the third the Dew.

Internet of things

The definition of the Internet of Things (IoT), depicted in Fig. 6, has been changing with the passage of time. With the growing number of internet-based applications that use many technologies, devices and tools, the meaning of the name has evolved accordingly: things (technologies, devices and tools) are used together in internet-based applications to generate data and to provide assistance and services to users anywhere, at any time. The internet can be considered a uniform technology from any location in that it provides the same service of 'connectivity'; its speed and security, however, are not uniform. The IoT as an emerging technology peaked during 2017–18, as is evident from Figs. 2 and 3, and it is expanding at a very fast rate. According to [21–24], the number of IoT devices could be in the millions by the year 2021.

[Fig. 6: Internet of Things]

IoT is enabling some amazing applications in tandem with wearable devices, sensor networks, Fog computing and other technologies, improving critical facets of our lives such as healthcare management, service delivery and business processes. Applications of IoT in the field of crowd management are discussed in [14], and applications in the context of privacy and security are discussed in [15, 16]. Key devices and technologies associated with the IoT include RFID tags [25], the internet, computers, cameras, mobile devices, coloured lights, sensors, sensor networks, drones, and Cloud, Fog and Dew computing.
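
As a simple illustration of how such an end node might report its data, the sketch below posts a JSON reading to a collection service using only Python's standard library; the URL and field names are hypothetical.

    # Sketch of an IoT end node posting one JSON reading to a collection service.
    # The URL and field names are hypothetical; only the standard library is used.
    import json
    import urllib.error
    import urllib.request

    def publish(reading, url="http://example.com/iot/readings"):
        payload = json.dumps(reading).encode("utf-8")
        request = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(request, timeout=5) as response:
                return response.status
        except urllib.error.URLError as error:
            # In a deployment this would trigger local buffering and a retry.
            print("publish failed:", error)
            return None

    publish({"device_id": "tag-17", "type": "temperature", "value": 36.7})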

Applications of blockchain

Blockchain is usually associated with cryptocurrencies like Bitcoin (currently there are over one and a half thousand cryptocurrencies, and the number is still rising), but the technology can also be used for many more critical applications in our daily lives. A Blockchain is a distributed ledger technology in the form of a distributed transactional database, secured by cryptography and governed by a consensus mechanism; it is essentially a record of digital events [26]. A block represents a completed transaction, or ledger entry, and subsequent and prior blocks are chained together, displaying the status of the most recent transaction. The role of the chain is to link records in chronological order; it continues to grow as further transactions take place, each recorded by adding a new block to the chain. User security and ledger consistency are provided by asymmetric cryptography and distributed consensus algorithms. Once a block is created, it cannot be altered or removed. The technology eliminates the need for a bank statement to verify the availability of funds or for a lawyer to certify the occurrence of an event. The benefits of Blockchain technology are inherent in its characteristics of decentralization, persistency, anonymity and auditability [27, 28].
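
The chaining idea can be illustrated with a toy hash chain in Python: each block stores the hash of its predecessor, so editing any earlier block invalidates the chain. This is only an illustrative sketch, without the consensus mechanism or asymmetric cryptography of a real Blockchain.

    # Toy hash chain: each block commits to its contents and to its predecessor.
    import hashlib
    import json
    import time

    def block_hash(block):
        body = {k: block[k] for k in ("timestamp", "data", "previous_hash")}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()

    def make_block(data, previous_hash):
        block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
        block["hash"] = block_hash(block)
        return block

    def verify(chain):
        # Valid only if every block's stored hash matches its contents and points
        # at the hash of the block before it; editing any block breaks the checks.
        return all(
            chain[i]["hash"] == block_hash(chain[i])
            and chain[i]["previous_hash"] == chain[i - 1]["hash"]
            for i in range(1, len(chain))
        )

    chain = [make_block({"event": "genesis"}, previous_hash="0")]
    chain.append(make_block({"event": "payment", "amount": 10}, chain[-1]["hash"]))
    chain.append(make_block({"event": "payment", "amount": 25}, chain[-1]["hash"]))
    print(verify(chain))               # True
    chain[1]["data"]["amount"] = 999   # tamper with an earlier block...
    print(verify(chain))               # ...and verification now fails: False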

Blockchain for business use

Blockchain, being the technology behind cryptocurrencies, started as an open-source Bitcoin community effort to allow reliable peer-to-peer financial transactions. Blockchain technology has made it possible to build a globally functional currency relying on code, without any bank or third-party platform [28]. These features have made Blockchain secure and transparent for business transactions of any kind, involving any currency. The literature describes many applications of Blockchain; nowadays they involve various kinds of transactions requiring verification, together with automated payment systems using smart contracts. The concept of smart contracts [28] has virtually eliminated the role of intermediaries. The technology is most suitable for businesses requiring high reliability and honesty, and because of its security and transparency it can help businesses attract customers. Blockchain can also be used to eliminate fake permits, as can be seen in [29].

Blockchain for healthcare management

As discussed above, Blockchain offers an efficient and transparent way of keeping digital records, a feature that is highly desirable in efficient healthcare management. The medical field is still struggling to manage its data efficiently in digital form. As usual, the issues of disparate and non-uniform record-storage methods hamper digitization, data warehousing and Big Data Analytics, which would otherwise allow efficient management and sharing of the data. The magnitude of these problems can be gauged from examples such as the United Kingdom National Health Service (NHS) target of digitizing UK healthcare by 2023 [30]. These problems lead to inaccuracies in the data, which can cause many issues in healthcare management, including clinical and administrative errors.

The use of Blockchain in healthcare can bring revolutionary improvements. For example, smart contracts can make it easier for doctors to access patients' data held by other organisations. The current consent process often involves bureaucratic steps and is far from simple or standardised, which creates many problems for patients and for the specialists treating them. The cost associated with transferring medical records between different locations can be significant, and it can be reduced to virtually zero by using Blockchain. More information on the use of Blockchain for healthcare data can be found in [30, 31].
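
As a rough illustration of how such a consent process might behave, the toy Python class below records grants and revocations of access and answers access queries against an append-only log; it is a plain simulation, not code for any real Blockchain platform, and all names are invented.

    # Toy simulation of a consent "smart contract" for medical-record access.
    # Names and rules are invented; not tied to any real Blockchain platform.
    import time

    class ConsentContract:
        def __init__(self, patient_id):
            self.patient_id = patient_id
            self.grants = {}        # doctor_id -> expiry timestamp
            self.audit_log = []     # append-only record of every action

        def _log(self, action, doctor_id):
            self.audit_log.append((time.time(), action, doctor_id))

        def grant(self, doctor_id, duration_seconds):
            self.grants[doctor_id] = time.time() + duration_seconds
            self._log("grant", doctor_id)

        def revoke(self, doctor_id):
            self.grants.pop(doctor_id, None)
            self._log("revoke", doctor_id)

        def can_access(self, doctor_id):
            self._log("check", doctor_id)
            return self.grants.get(doctor_id, 0) > time.time()

    contract = ConsentContract("patient-42")
    contract.grant("dr-jones", duration_seconds=3600)
    print(contract.can_access("dr-jones"))   # True
    contract.revoke("dr-jones")
    print(contract.can_access("dr-jones"))   # False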

Environment cleaning robot

One ongoing healthcare issue is the eradication of deadly viruses and bacteria from hospitals and healthcare units. Nosocomial (hospital-acquired) infections are a common problem for hospitals and are currently treated using various techniques [32, 33]. Historically, cleaning hospital wards and operating rooms with chlorine has been effective, but in the face of deadly viruses such as Ebola, HIV/AIDS, swine influenza (H1N1, H1N2), various strains of flu, Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS), this method has dangerous implications [14]. A more advanced approach used in US hospitals employs "robots" to purify the space, as can be seen in [32, 33]. However, the current "robots" have certain limitations. Most of these devices require a human to place them in the infected areas; they cannot move effectively (they just revolve around themselves), so the UV light does not reach all areas but only a very limited area within the range of the UV emitter; and the robot itself may become infected, as the light does not reach most of the robot's own surfaces. There is therefore an emerging need to build a robot that would not require the physical presence of humans to handle it, that could purify an entire room by covering all its surfaces with UV light and that, at the same time, would not become infected itself.

Figure 7 gives an overview of the design of a fully motorized spider robot with six legs. The robot supports Wi-Fi connectivity for control and is able to move around the room and clean the entire area. The spider design allows the robot to move on any surface, including climbing steps; most importantly, the robot can use its legs to move the UV light emitter as well as to clean its own body before leaving the room, which substantially reduces the risk of the robot transmitting any infections.

[Fig. 7: Spider robot for virus cleaning]

Additionally, the robot will be equipped with a motorized camera allowing the operator to monitor the space and stop the UV emission in unpredicted situations. The operator can control the robot via a networked graphical user interface and/or from an augmented-reality environment using technologies such as the Oculus Touch. In more detail, the user will wear the Oculus Rift virtual reality headset and use the Oculus Touch hand controllers to remotely control the robot. This gives the user the robot's view in a natural manner and allows them to control the robot's two front robotic arms via the Oculus Touch controllers, making advanced movements as easy as moving their hands. The physical movements of the human hand are captured by the Oculus Touch sensors and transmitted to the robot, which then uses inverse kinematics to translate the position and actions of the human hand into movements of the robotic arm. The same technique will be used during the robot's training phase, where the human user teaches the robot how to clean various surfaces and then purify itself, simply by moving their hands accordingly. The design of the spider robot was proposed in a project proposal submitted to the King Abdulaziz City of Science and Technology ( https://www.kacst.edu.sa/eng/Pages/default.aspx ) by the author and George Tsaramirsis ( https://www.researchgate.net/profile/George_Tsaramirsis ).
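
To illustrate the inverse-kinematics step, the sketch below solves the classic closed-form case of a planar two-link arm: given a target hand position, it returns the shoulder and elbow angles. The link lengths are assumed, and a real spider leg or arm would involve more joints and 3-D geometry.

    # Hedged sketch of inverse kinematics for a planar two-link arm.
    # Link lengths are assumed values for illustration.
    import math

    L1, L2 = 0.30, 0.25   # assumed link lengths in metres

    def two_link_ik(x, y):
        reach = math.hypot(x, y)
        if reach > L1 + L2 or reach < abs(L1 - L2):
            raise ValueError("target out of reach")
        # Law of cosines gives the elbow angle, then the shoulder angle follows.
        cos_elbow = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
        elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
        shoulder = math.atan2(y, x) - math.atan2(
            L2 * math.sin(elbow), L1 + L2 * math.cos(elbow)
        )
        return shoulder, elbow

    # Joint angles (degrees) needed to place the hand at (0.35, 0.20).
    print([round(math.degrees(a), 1) for a in two_link_ik(0.35, 0.20)])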

Conclusions

We have presented details of some emerging technologies and real-life applications that are providing businesses with remarkable opportunities which were previously unthinkable. Businesses are continuously increasing their use of new technologies and tools to improve processes and benefit their clients. The IoT and associated technologies can now provide real-time, ubiquitous processing that eliminates the need for human surveillance. Similarly, Virtual Reality, Artificial Intelligence and robotics are finding remarkable applications in the field of medical surgery. As discussed, with the help of technology we can now predict and mitigate some natural disasters, such as stampedes, using sensor networks and other associated technologies. Finally, the rise of Big Data Analytics is helping businesses and government agencies make smarter decisions to achieve their targets and expectations.


[CFP] 2024 Situations International Conference: Asian Diaspora in the 21st Century

Call for Papers

2024 Situations International Conference

22-23 November, 2024

Yonsei University, Seoul, South Korea

Asian Diaspora in the 21st Century:

Transnational Hauntology and Affective Production

Scholars have extensively debated the meaning and significance of diaspora . At their inception, classic diaspora studies considered a racial or ethnic group’s dispersal caused by religious difference, the Jewish people being the archetype in understanding diaspora. The scope of modern diaspora studies has been expanded to embrace emancipatory politics and the exploration of various conditions of racial, ethnic, and political minorities. Contemporary diaspora is characterized by fragmentation, dislocation, and globalization, and these new features must be clearly redefined and analyzed. Non-European diaspora experience, Asian diaspora in particular, has not been extensively explored. Raising questions about the magnitude and the limited destinations of Korean migration being identified as a diaspora, Gerard Chaliand and Jean Pierre Rageau argue that “the total number of overseas Koreans lacks the massive proportions of a typical diaspora, such as the Irish case, in which more than half of the population emigrated from their homeland” (1995). Should we define an ethnic group’s diaspora through size or distance? Doesn’t the atypicality of the Korean diaspora call for a retheorization of diaspora today?

A small group of migrants may have felt themselves to be in a precarious situation in the 19th and 20th centuries, but in the 21st century, diasporic subjects have multiple ways of retaining contact with their communities of origin, thanks to advances in communication technology and frequent air travel. Contemporary diasporas in the 21st century can be characterized by varieties of diasporic experience that no longer necessitate a permanent break from one’s homeland. The consciousness of being a diasporic subject may no longer depend as much on a physical and geographic separation from a homeland. What does diasporic consciousness mean then in a world where contact and even return to the homeland is possible? And turning away from the attention of ethnicity or race on diaspora to the emotional experience of being unsettled, displaced, and haunted, may unveil a greater understanding of our being in the 21st century.

Playing on the concept of ontology and resonating with his lifelong project of deconstruction, Jacques Derrida suggests by the term, hauntology, how to engage ghosts and historical remnants from the past. (Hau)ntology is a neologism that reminds us that we are always displaced and unhomed. When diasporic subjects seek to break away from their past, it can always come back to haunt their present experience associated with mixed feelings of melancholia, rage, alienation, anomie, and hopefulness for a better future. The displaced subjects’ affective production transcending the limited ties of kinship and nation can mediate the deterritorialized humanity in the 21st century. Situations (Volume 18, No. 1, 2025) calls for papers that explore concepts of migration and diaspora in the 21st century and/or papers that examine literary and cultural content representing, mediating, or rearticulating the diasporic consciousness of Asian diaspora communities.

Possible topics:

  • Contemporary diasporas: North Korean defectors, the Zainichi community, the Chinese diaspora in Southeast Asia, and the South Asian diaspora in African and Arab states
  • diasporic consciousness: displacement and lost land, homeland and host land
  • language barriers and linguistic isolation
  • citizenship and sense of belonging
  • the myth and politics of return
  • refugee camps, resettlement, and national borders
  • gendered experience within diasporic communities
  • inter-Asian migration and politics of asylum
  • the problem of collective memory in diasporic communities
  • assimilation and de-assimilation in one’s adopted land
  • diaspora and the “blue humanities” centered on oceans and seas

Confirmed Keynote Speakers:

John Lie, Distinguished Professor of Sociology, U.C. Berkeley

So-young Kim, Professor of Cinema Studies, Korea National University of Arts

Early inquiries with 200-word abstracts are appreciated. By 31 August 2024, we invite you to submit your 4,000-word Chicago-style conference presentation with its abstract and keywords (acceptance of the presentation will be decided on the basis of the 4,000-word paper).

Each invited participant is then expected to turn his or her conference presentation into a finished 6,000-word paper for possible inclusion in a future issue of the SCOPUS-indexed journal,  Situations: Cultural Studies in the Asian Context . All inquiries and submissions should be sent to both  [email protected]  and [email protected] .

Submissions should follow the Chicago Manual of Style (16th ed.), using only endnotes.

We will pay for the hotel accommodation for those participants whose papers we accept. The presenters will share twin bedrooms.

