
From self-driving cars to smart homes: a thesis explores the future of interconnected machines

From smart refrigerators to digital locks and self-driving cars, the Internet of Things is in a lot of electronic devices (Sebastian Scholz (Nuki) / unsplash.com)

The Internet of Things, or IoT, is rapidly spreading across an increasingly interconnected world. As the need for connection and computing between devices grows, so do the challenges. Ana María Juan Ferrer's thesis, developed in the doctoral programme in Information and Network Technologies at the Universitat Oberta de Catalunya (UOC), analyses the specific challenges posed by IoT Edge devices, furthering the establishment of a theory to help build an autonomous and more interconnected world.

What is the Internet of Things?

When you ask your voice assistant to switch on the television, when you check your fitness tracker for your step count or when you receive a notification on your smartphone about a temperature increase at home, you are operating within the IoT paradigm. The network of networks has become an excellent tool that connects a multitude of everyday devices, expanding the services they offer.

From smart refrigerators to digital locks and self-driving cars, the Internet of Things is present in countless electronic devices. IoT refers to the grouping and interconnection of devices and objects via a network, such as the Internet, where all of them are visible to and can interact with each other.

The Internet of Things is present in our lives, even if we are often unaware of it. Broadly, its objective is to make our everyday lives easier. But, as interconnection needs grow, so do the computing and connectivity requirements (the ability to maintain a connection of sufficient quality) necessary to keep up with the services being offered. This is where edge and cloud computing come in.

Edge and cloud computing: the evolution of IoT

In her project, Ad-hoc Formation of Edge Clouds over Heterogeneous Non-dedicated Resources, Ana María Juan Ferrer analyses the challenges presented by IoT Edge Cloud devices. "Cloud computing allows access to computing and storage capacity as a service", explained the researcher about the paradigm that allows computing services to be offered over a network (usually the Internet). "Rather than being purchased, technological infrastructures are rented for the exact time in which they are going to be used".

Let's travel back in time to the early 2000s and imagine that we need to use a computer but we don't have one at home. We could go to an Internet café and pay for the time we use the computer. We would not be paying for someone to perform the tasks nor buying a computer, but renting the equipment for the time we use the service.

In the same way, cloud computing provides specific computing services to those who need them: "Cloud computing services are provided from large data centres operated by cloud service providers from various regions", explained the researcher.

Edge computing aims to bring these computing services closer to locations in the vicinity of data sources "to avoid the latency problems that have been observed in Internet of Things installations when accessing cloud services", she confirmed. By offering these services as near as possible, delays and connectivity problems can be reduced. "Today, connected devices are not only available anywhere, but are rapidly becoming more complex. In this way, Ad-hoc Edge computing is a distributed and decentralised system that is formed from the resources available in IoT devices to exploit the entire computing capacity found in all types of devices connected at the edge of the network".
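The latency trade-off she describes can be sketched with a toy model. All numbers below are invented for illustration, not measurements from the thesis: the point is simply that a nearby edge node can beat a distant data centre even when its hardware is slower.

```python
# Toy latency model contrasting cloud and edge deployments.
# All figures are illustrative assumptions, not measurements.

def response_time_ms(network_rtt_ms: float, compute_ms: float) -> float:
    """Total time for one request: network round trip plus processing."""
    return network_rtt_ms + compute_ms

# A distant cloud data centre: long round trip, fast servers.
cloud = response_time_ms(network_rtt_ms=80.0, compute_ms=5.0)

# An edge node near the data source: short round trip, slower hardware.
edge = response_time_ms(network_rtt_ms=2.0, compute_ms=20.0)

print(f"cloud: {cloud} ms, edge: {edge} ms")  # cloud: 85.0 ms, edge: 22.0 ms
```

Under these hypothetical figures the edge node responds roughly four times faster, which is the kind of gain that matters for latency-sensitive IoT installations.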

Taking these two concepts into consideration, her doctoral thesis focuses on the IoT Edge Cloud and how it can help improve the Internet of Things ecosystem that surrounds us. "My work addresses the challenges presented by the infrastructure of IoT Edge Cloud devices in two main areas of research", explained Ana María. "At the resource management level, it provides mechanisms to allow the dynamic formation of Ad-hoc Edge clusters and their management. This thesis also presents an admission control mechanism, together with an associated model to predict the resource availability of the participating IoT Edge devices", she added.
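The thesis's actual admission-control model is not reproduced in this article, but the general idea can be sketched: admit a task onto a participating device only if the device's spare capacity and predicted availability cover the task's needs. Everything in this sketch (names, fields, thresholds) is hypothetical, invented purely to illustrate the concept.

```python
# Hypothetical admission-control sketch for an ad-hoc edge cluster.
# Not the thesis's model: a minimal illustration of admitting a task
# only when a device can plausibly host it for its whole runtime.

from dataclasses import dataclass

@dataclass
class EdgeDevice:
    name: str
    spare_cpu: float           # fraction of CPU currently free, 0..1
    predicted_uptime_s: float  # forecast time the device stays reachable

def admit(task_cpu: float, task_duration_s: float, device: EdgeDevice) -> bool:
    """Admit the task only if the device has the capacity and is
    predicted to remain available for the task's full duration."""
    return (device.spare_cpu >= task_cpu
            and device.predicted_uptime_s >= task_duration_s)

phone = EdgeDevice("phone", spare_cpu=0.4, predicted_uptime_s=120)
print(admit(task_cpu=0.3, task_duration_s=60, device=phone))   # True
print(admit(task_cpu=0.3, task_duration_s=600, device=phone))  # False: device may leave
```

The second call is rejected because the device is predicted to leave the cluster before the task would finish, which is exactly the churn problem that non-dedicated IoT resources introduce.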

The future of self-driving cars and smart homes

The main contributions made by the Ad-hoc Edge Cloud architecture that is the subject of this study are based on three fundamental aspects, according to the researcher. The key aspects focus on being able to use the IoT devices themselves to do the computing that would otherwise be diverted to cloud services; reinforcing the security around this phenomenon, avoiding loss of information; and the decentralisation of the functions of these services, so there would be no single point of failure.

All this translates into an increase in possibilities. The applications of IoT are expanding their horizons thanks to studies such as this doctoral thesis, since they can be applied more effectively in increasingly complex environments. Self-driving cars exemplify this need: the computing and connectivity that vehicles require, supported by artificial intelligence that consumes an enormous amount of data, make Edge cloud computing all but mandatory, since without this technology it is very difficult to process all that information in an instant.

In smart homes, this technology will allow the creation of personal infrastructures between connected devices, while, in industrial facilities and plants, the formation of Ad-hoc Edge infrastructures between all the elements will allow their semi-autonomous operation. And we should not ignore drones, which can be used to inspect infrastructure, defining specific mechanisms for collaboration between fleets in order to increase coverage areas; or the coordination of vehicles with the smart city and connected roads.

In short, IoT is forging ahead on its inexorable path to a connected future. The challenges are increasing, but so are our knowledge and the tools available to face a world that a decade ago would have seemed like science fiction.

Ana María Juan Ferrer's thesis was supervised by two members of the Faculty of Computer Science, Multimedia and Telecommunications: Joan Manuel Marquès Puig, a researcher with the Internet Computing & Systems Optimization (ICSO) group, and Josep Jorba Esteve, a Wireless Networks (WINE) researcher. Both groups are attached to the Internet Interdisciplinary Institute (IN3), a UOC research centre.

This UOC research supports Sustainable Development Goal (SDG) 9, Industry, Innovation and Infrastructure.

Reference thesis

JUAN FERRER, Ana María. Ad-hoc formation of edge clouds over heterogeneous non-dedicated resources [online]. Universitat Oberta de Catalunya, 2020. http://hdl.handle.net/10609/128326

UOC R&I

The UOC's research and innovation (R&I) helps overcome pressing challenges faced by global societies in the 21st century by studying interactions between technology and the human and social sciences, with a specific focus on the network society, e-learning and e-health. Over 500 researchers and 51 research groups work across the University's seven faculties and two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC).

The United Nations' 2030 Agenda for Sustainable Development and open knowledge serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu. #UOC25years


Self-Driving Car Ethics

Loyola University, Center for Digital Ethics & Policy. October 10, 2018.

Earlier this spring, 49-year-old Elaine Herzberg was walking her bike across the street in Tempe, Ariz., when she was hit and killed by a car traveling at over 40 miles an hour.

There was something unusual about this tragedy: The car that hit Herzberg was driving on its own. It was an autonomous car being tested by Uber.

It’s not the only car crash connected to autonomous vehicles (AVs) as of late. In May, a Tesla in “autopilot” mode accelerated briefly before hitting the back of a fire truck, injuring two people.

The accidents unearthed debates that have long been simmering around the ethics of self-driving cars. Is this technology really safer than human drivers? How do we keep people safe while this technology is being developed and tested? In the event of a crash, who is responsible: the developers who create faulty software, the human in the driver’s seat who fails to recognize the system failure, or one of the hundreds of other hands that touched the technology along the way?

The need for driving innovation is clear: Motor vehicle deaths topped 40,000 in 2017, according to the National Safety Council. A recent study by RAND Corporation estimates that putting AVs on the road once the technology is just 10 percent better than human drivers could save thousands of lives. Industry leaders continue to push ahead with development of AVs: Over $80 billion has been invested so far in AV technology, the Brookings Institute estimated. Top automotive, rideshare and technology companies including Uber, Lyft, Tesla, and GM have self-driving car projects in the works. GM has plans to release a vehicle that does not need a human driver--and won’t even have pedals or a steering wheel--by 2019.

But as the above crashes indicate, there are questions to be answered before the potential of this technology is fully realized.

Ethics in the programming process

Accidents involving self-driving cars are usually due to sensor error or software error, explains Srikanth Saripalli, associate professor in mechanical engineering at Texas A&M University, in The Conversation. The first issue is a technical one: Light Detection and Ranging (LIDAR) sensors won’t detect obstacles in fog, cameras need the right light, and radars aren’t always accurate. Sensor technology continues to develop, but there is still significant work needed for self-driving cars to drive safely in icy, snowy and other adverse conditions. When sensors aren’t accurate, it can cause errors in the system that likely wouldn’t trip up human drivers. In the case of Uber’s accident, the sensors identified Herzberg (who was walking her bike) as a pedestrian, a vehicle and finally a bike “with varying expectations of future travel path,” according to a National Transportation Safety Board (NTSB) preliminary report on the incident. The confusion caused a deadly delay--it was only 1.3 seconds before impact that the software indicated that emergency brakes were needed.
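Basic stopping-distance physics shows why a 1.3-second warning at roughly 40 mph is so unforgiving. The sketch below is a back-of-the-envelope check, not a reconstruction of the Uber incident; the deceleration and actuation-lag figures are illustrative assumptions, not values from the NTSB report.

```python
# Back-of-the-envelope stopping check (illustrative physics only,
# not a reconstruction of the Uber incident).

def can_stop(speed_ms: float, warning_s: float, decel_ms2: float = 7.0,
             reaction_s: float = 0.0) -> bool:
    """True if braking that begins `warning_s` before projected impact
    halts the car within the remaining distance to the obstacle.
    Assumes constant speed until braking and constant deceleration."""
    distance_to_obstacle = speed_ms * warning_s
    distance_during_reaction = speed_ms * reaction_s
    braking_distance = speed_ms ** 2 / (2 * decel_ms2)  # v^2 / (2a)
    return distance_during_reaction + braking_distance <= distance_to_obstacle

v = 40 * 0.44704  # 40 mph in metres per second, about 17.9

print(can_stop(v, warning_s=3.0))                  # True: ample warning
print(can_stop(v, warning_s=1.3))                  # True, but with almost no margin
print(can_stop(v, warning_s=1.3, reaction_s=0.5))  # False once a 0.5 s actuation lag is added
```

Even under generous assumptions, the 1.3-second case leaves less than a metre of margin, and any actuation or hand-over delay erases it entirely.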

Self-driving cars are programmed to be rule-followers, explained Saripalli, but the realities of the road are usually a bit more blurred. In a 2017 accident in Tempe, Ariz., for example, a human-driven car attempted to turn left through three lanes of traffic and collided with a self-driving Uber. While there isn’t anything inherently unsafe about proceeding through a green light, a human driver might have expected there to be left-turning vehicles and slowed down before the intersection, Saripalli pointed out. “Before autonomous vehicles can really hit the road, they need to be programmed with instructions about how to behave when other vehicles do something out of the ordinary,” he writes.

However, in both the Uber accident that killed Herzberg and the Tesla collision mentioned above, there was a person behind the wheel of the car who wasn’t monitoring the road until it was too late. Even though both companies require that drivers keep their hands on the wheel and eyes on the road in case of a system error, this is a reminder that humans are prone to mistakes, accidents and distractions--even when testing self-driving cars. Can we trust humans to be reliable backup drivers when something goes wrong?

Further, can we trust that companies will be thoughtful--and ethical--about the expectations for backup drivers in the race for miles? Backup drivers who worked for Uber told CityLab that they worked eight- to ten-hour shifts with a 30-minute lunch and were often pressured to forgo breaks. Staying alert and focused for that amount of time is already challenging. With the false security of self-driving technology, it can be tempting to take a quick mental break while on the road. “Uber is essentially asking this operator to do what a robot would do. A robot can run loops and not get fatigued. But humans don’t do that,” an operator told CityLab.

The limits of the trolley scenario

Despite the questions that these accidents raise about the development process, the ethics conversation up to this point has largely been focused on the moment of impact. Consider the “trolley problem,” a hypothetical ethical brain teaser frequently brought up in the debate over self-driving cars. If an AV is faced with an inevitable fatal crash, whose life should it save? Should it prioritize the life of the pedestrian? The passenger? Saving the most lives? Saving the lives of the young or the elderly?

Ethical questions abound in every engineering and design decision, engineering researchers Tobias Holstein, Gordana Dodig-Crnkovic and Patrizio Pelliccione argue in their recent paper, Ethical and Social Aspects of Self-Driving Cars, ranging from software security (can the car be hacked?) to privacy (what happens to the data collected by the car sensors?) to quality assurance (how often does a car like this need maintenance checks?). Furthermore, the researchers note that some ethics are directly at odds with the private industry’s financial incentives: Should a car manufacturer be allowed to sell cheaper cars outfitted with cheaper sensors? Could a customer choose to pay more for a feature that lets them influence the decision-making of the vehicle in fatal situations? How transparent should the technology be, and how will that be balanced with intellectual property that is vital to a competitive advantage?

The future impact of this technology hinges on these complex and bureaucratic “mundane ethics,” points out Johannes Himmelreich, interdisciplinary ethics fellow at Stanford University, in The Conversation. We need to recognize that big moral quandaries don’t just happen five seconds before the point of impact, he writes. Programmers could choose to optimize acceleration and braking to reduce emissions or improve traffic flow. But even these decisions pose big questions for the future of society: Will we prioritize safety or mobility? Efficiency or environmental concerns?

Ethics and responsibility

Lawmakers have already begun making these decisions. State governments and municipalities have scrambled to play host to the first self-driving car tests, in hopes of attracting lucrative tech companies, jobs and an innovation-friendly reputation. Arizona governor Doug Ducey has been one of the most vocal proponents, welcoming Uber when the company was kicked out of San Francisco for testing without a permit.

Currently there is a patchwork of laws and executive orders at the state level that regulate self-driving cars. Varying laws make testing and the eventual widespread roll-out more complicated and, as it is, it is likely that self-driving cars will need a completely unique set of safety regulations. Outside of the US, there has been more concrete discussion. Last summer, Germany adopted the world’s first ethical guidelines for driverless cars. The rules state that human lives must take priority over damage to property and that, in the case of an unavoidable human accident, a decision cannot be made based on “age, gender, physical or mental constitution,” among other stipulations.

There has also been discussion as to whether consumers should have the ultimate choice over AV ethics. Last fall, researchers at the European University Institute suggested the implementation of an “ethical knob,” as they call it, in which the consumer would set the software’s ethical decision-making to altruistic (preference for third parties), impartial (equal importance to all parties) or egoistic (preference for all passengers in the vehicle) in the case of an unavoidable accident. While their approach certainly still poses problems (a road in which every vehicle prioritizes the safety of its own passengers could create more risk), it does reflect public opinion. In a series of surveys, researchers found that people believe in utilitarian ethics when it comes to self-driving cars--AVs should minimize casualties in the case of an unavoidable accident--but wouldn’t be keen on riding in a car that would potentially value the lives of multiple others over their own.
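The “ethical knob” proposal is essentially a user-facing configuration parameter consulted only in an unavoidable-accident scenario. A minimal sketch of the three settings the researchers describe follows; the numeric weights are invented here purely for illustration and do not come from their paper.

```python
# Sketch of the "ethical knob" idea: a user-set preference with three
# positions, as described in the European University Institute proposal.
# The numeric weights are hypothetical, for illustration only.

from enum import Enum

class KnobSetting(Enum):
    ALTRUISTIC = "altruistic"  # preference for third parties
    IMPARTIAL = "impartial"    # equal importance to all parties
    EGOISTIC = "egoistic"      # preference for the vehicle's passengers

def passenger_weight(setting: KnobSetting) -> float:
    """Relative weight given to passenger harm versus third-party harm
    when the software evaluates unavoidable-accident outcomes."""
    return {
        KnobSetting.ALTRUISTIC: 0.5,
        KnobSetting.IMPARTIAL: 1.0,
        KnobSetting.EGOISTIC: 2.0,
    }[setting]

print(passenger_weight(KnobSetting.IMPARTIAL))  # 1.0
```

Framing the knob this way makes the policy question concrete: whoever chooses the weights, whether the consumer, the manufacturer or a regulator, is making the ethical decision.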

This dilemma sums up the ethical challenges ahead as self-driving technology is tested, developed and increasingly driving next to us on the roads. The public wants safety for the most people possible, but not if it means sacrificing one’s own safety or the safety of loved ones. If people are to put their lives in the hands of sensors and software, thoughtful ethical decisions will need to be made to ensure a death like Herzberg’s isn’t inevitable on the journey to safer roads.

Karis Hustad is a Denmark-based freelance journalist covering technology, business, gender, politics and Northern Europe. She previously reported for The Christian Science Monitor and Chicago Inno. Follow her on Twitter @karishustad and see more of her work at karishustad.com.


Seven Arguments Against the Autonomous-Vehicle Utopia

All the ways the self-driving future won’t come to pass

Self-driving cars are coming. Tech giants such as Uber and Alphabet have bet on it, as have old-school car manufacturers such as Ford and General Motors. But even as Google’s sister company Waymo prepares to launch its self-driving-car service and automakers prototype vehicles with various levels of artificial intelligence, there are some who believe that the autonomous future has been oversold—that even if driverless cars are coming, it won’t be as fast, or as smooth, as we’ve been led to think. The skeptics come from different disciplines inside and outside the technology and automotive industries, and each has a different bear case against self-driving cars. Add them up and you have a guide to all the ways our autonomous future might not materialize.

Bear Case 1: They Won’t Work Until Cars Are as Smart as Humans

Computers have nowhere near human intelligence. On individual tasks, such as playing Go or identifying some objects in a picture, they can outperform humans, but that skill does not generalize. Proponents of autonomous cars tend to see driving as more like Go: a task that can be accomplished with a far-lower-than-human understanding of the world. But in a duo of essays in 2017, Rodney Brooks, a legendary roboticist and artificial-intelligence researcher who directed the MIT Computer Science and Artificial Intelligence Laboratory for a decade, argued against the short-term viability of self-driving cars based on the sheer number of “edge cases,” i.e., unusual circumstances, they’d have to handle.

Read: The AI that has nothing to learn from humans

“Even with an appropriate set of guiding principles, there are going to be a lot of perceptual challenges … that are way beyond those that current developers have solved with deep learning networks, and perhaps a lot more automated reasoning than any AI systems have so far been expected to demonstrate,” he wrote. “I suspect that to get this right we will end up wanting our cars to be as intelligent as a human, in order to handle all the edge cases appropriately.”

He still believes that self-driving cars will one day come to supplant human drivers. “Human driving will probably disappear in the lifetimes of many people reading this,” he wrote. “But it is not going to all happen in the blink of an eye.”

Bear Case 2: They Won’t Work, Because They’ll Get Hacked

Every other computer thing occasionally gets hacked, so it’s a near-certainty that self-driving cars will be hacked, too. The question is whether that intrusion—or the fear of it—will be sufficient to delay or even halt the introduction of autonomous vehicles.

Read: The banality of the Equifax breach

The transportation reporter and self-driving car skeptic Christian Wolmar once asked a self-driving-car security specialist named Tim Mackey to lay out the problem. Mackey “believes there will be a seminal event that will stop all the players in the industry in their tracks,” Wolmar wrote. “We have had it in other areas of computing, such as the big-data hacks and security lapses, and it will happen in relation to autonomous cars.” Cars, even ones that don’t drive themselves, have already proved vulnerable to hackers.

The obvious counterargument is that data lapses, hacking, identity theft, and a whole lot of other things have done basically nothing to slow down the consumer internet. A lot of people see these problems and shrug. However, the physical danger that cars pose is far greater, and maybe the norms developed for robots will be different from those prevalent on the internet, legally and otherwise, as the University of Washington legal scholar Ryan Calo has argued.

Bear Case 3: They Won’t Work as a Transportation Service

Right now most companies working on self-driving cars are working on them as the prelude to a self-driving-car service. So you wouldn’t own your car; you’d just get rides from a fleet of robo-cars maintained by Waymo or Uber or Lyft. One reason for that is that the current transportation-service companies can’t seem to find their way to profitability. In fact, they keep losing insane amounts of money. Take the driver out of the equation and maybe all of that money saved would put them in the black. At the same time, the equipment that’s mounted on self-driving cars to allow them to adequately convert physical reality into data is extremely expensive. Consumer vehicles with all those lasers and computers on board would be prohibitively expensive. On top of that, the job of calibrating and maintaining all that equipment would be entrusted to people like me, who don’t wash their car for months at a time.

Read: Will Uber and Lyft become different things?

Put these factors together and the first step in fully autonomous vehicles that most companies are betting on is to sell robo-car service, not robo-cars.

There is a simple rejoinder to why this might not work. George Hotz, who is himself attempting to build a DIY driving device, has a funny line that sums it up. “They already have this product, it’s called Uber, it works pretty good,” Hotz told The Verge. And what is a robo-car ride if not “a worse Uber”?

Bear Case 4: They Won’t Work, Because You Can’t Prove They’re Safe

Commercial airplanes rely heavily on autopilot, but the autopilot software is considered provably safe because it does not rely on machine-learning algorithms. Such algorithms are harder to test because they rely on statistical techniques that are not deterministic. Several engineers have questioned how self-driving systems based on machine learning could be rigorously screened. “Most people, when they talk about safety, it’s ‘Try not to hit something,’” Phil Koopman, who studies self-driving-car safety at Carnegie Mellon University, told Wired this year. “In the software-safety world, that’s just basic functionality. Real safety is, ‘Does it really work?’ Safety is about the one kid the software might have missed, not about the 99 it didn’t.”

Regulators will ultimately decide if the evidence that self-driving-car companies such as Waymo have compiled of safe operation on roads and in simulations meets some threshold of safety. More deaths caused by autonomous vehicles, such as an Uber’s killing of Elaine Herzberg, seem likely to drive that threshold higher.

Koopman, for one, thinks that new global standards like the ones we have for aviation are needed before self-driving cars can really get on the road, which one imagines would slow down the adoption of the cars worldwide.

Bear Case 5: They’ll Work, But Not Anytime Soon

Last year, Ford announced plans to invest $1 billion in Argo AI, a self-driving-car company. So it was somewhat surprising when Argo’s CEO, Bryan Salesky, posted a pessimistic note about autonomous vehicles on Medium shortly after. “We’re still very much in the early days of making self-driving cars a reality,” he wrote. “Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology.”

In truth, that’s the timeline even the less aggressive carmakers have put forth. Most companies expect some version of self-driving cars in the 2020s; the disagreement lies in when within the decade.

Bear Case 6: Self-Driving Cars Will Mostly Mean Computer-Assisted Drivers

While Waymo and a few other companies are committed to fully driverless cars or nothing, most major carmakers plan to offer increasing levels of autonomy, bit by bit. That’s GM’s play with the Cadillac Super Cruise. Daimler, Nissan, and Toyota are targeting the early 2020s for incremental autonomy.

Read: The most important self-driving car announcement yet

Waymo’s leadership and Aurora’s Chris Urmson worry that disastrous scenarios lie down this path. A car that advertises itself as self-driving “should never require the person in the driver’s seat to drive. That hand back [from machine to human] is the hard part,” Urmson told me last year. “If you want to drive and enjoy driving, God bless you, go have fun, do it. But if you don’t want to drive, it’s not okay for the car to say, ‘I really need you in this moment to do that.’”

Bear Case 7: Self-Driving Cars Will Work, But Make Traffic and Emissions Worse

And finally, what if self-driving works, technically, but the system it creates only “solve[s] the problem of ‘I live in a wealthy suburb but have a horrible car commute and don’t want to drive anymore but also hate trains and buses,’” as the climate advocate Matt Lewis put it. That’s what University of California at Davis researchers warn could happen if people don’t use (electric-powered) self-driving services and instead own (gasoline-powered) self-driving cars. “Sprawl would continue to grow as people seek more affordable housing in the suburbs or the countryside, since they’ll be able to work or sleep in the car on their commute,” the scenario unfolds. Public transportation could spiral downward as ride-hailing services take share from the common infrastructure.

And that’s not an unlikely scenario based on current technological and market trends. “Left to the market and individual choice, the likely outcome is more vehicles, more driving and a slow transition to electric cars,” wrote Dan Sperling, the director of the UC Davis Institute of Transportation Studies, in his 2018 book, Three Revolutions: Steering Automated, Shared, and Electric Vehicles to a Better Future.

It would certainly be a cruel twist if self-driving cars managed to save lives on the road while contributing to climate catastrophe. But if the past few years of internet history have taught us anything, it is that any technology as powerful and society-shaping as autonomous vehicles will have unintended consequences. And skeptics might just have a handle on what those could be.


Self-Driving Cars: Everything You Need to Know

Author: Sean Tucker

Date Published: 8/03/2021

Publisher: Kelley Blue Book

Link: https://www.kbb.com/car-advice/self-driving-cars/

This guide will walk you through what you need to know about automotive autopilot, self-driving technology, and driver aids today and tomorrow.

What is a Self-Driving Car?

From engineering jargon to marketing speak, the lingo continues to evolve in this field. Roughly speaking, you can sort the technologies people might refer to as self-driving into two categories — driver support and automation systems.

Driver Support

Driver support technology reduces the workload on the driver. Today, most automakers sell various driver support systems, either as standard equipment or as options on their cars. These include intelligent or adaptive cruise control, lane-keeping assists, and hands-free capability.

Autonomous Systems

Autonomous systems do the driving for you. No automaker today sells a truly autonomous system, but some are pushing toward that technology. One such project is Waymo, a sister company of Google, which is testing autonomous rideshare vehicles in Phoenix using converted Chrysler Pacifica minivans.

Six Levels of Self-Driving Technology

SAE International, a global association of engineers and related technical experts in the aerospace, automotive, and commercial-vehicle industries, has laid out a useful framework for thinking about self-driving systems. It sorts the technologies into six levels, labeled zero through five.

However, not every level is classified as autonomous driving. According to SAE, levels zero through two are considered driver support features, while levels three through five are classified as having autonomous capability.

At Level 0, the car reacts only to the driver’s input. Even if it uses sensors to warn you of surrounding traffic, like a blind-spot alert system or a lane-departure warning, it still has no self-driving capability to correct or counter the perceived threat.

At Level 1, your car can intervene slightly in your driving in an attempt to keep you safe. A lane-keeping system that helps steer to center you in a lane is a Level 1 technology.

At Level 2, features communicate with one another, and more than one can be active simultaneously. An example of this technology is an adaptive cruise control system that adjusts your speed to keep you a certain distance from the car ahead while centering the car in its lane.

Currently, Level 2 systems are the most sophisticated technology sold on cars in America. Some automakers describe these systems in ways that make them seem more advanced than Level 2 standards because they allow drivers to take their hands off the steering wheel briefly.

However, all these systems require drivers to keep their eyes focused ahead. Drivers need to be ready at all times to take over control of the car at a moment’s notice.

At Level 3, the car can drive itself under limited conditions, but the driver must remain aware and prepared to take over. Automakers have tested Level 3 systems that will allow the driver to take their hands off the wheel in a traffic jam, for instance, but prompt the driver to take over when the congestion eases.

According to Honda and a few other reputable sources, the Honda Legend flagship sedan is the first Level 3 autonomous car. Released on March 5, 2021, it is available only in Japan, for lease, in a limited run of 100 vehicles.

There are no Level 3 systems currently sold to consumers in the U.S.

At Level 4, the car can drive itself in a fixed loop on known roads. The rider is not required to take over driving at any time. These vehicles may or may not have a steering wheel or pedals. In some places, Level 4 driverless rideshare vehicles (like Waymo’s) are in limited testing. But they are not yet approved for general use in any state.

At Level 5, the car can drive itself under any conditions and on any road. These vehicles do not have steering wheels or pedals. At this point, Level 5 systems are theoretical.
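The six levels above amount to a simple taxonomy, and the split the article describes (levels 0-2 as driver support, 3-5 as automated driving) can be sketched as a lookup table. This is an illustrative sketch only; the descriptions are paraphrased, not SAE's official wording:

```python
# Illustrative sketch of the SAE J3016 levels as described above.
# Descriptions are paraphrased summaries, not official SAE text.
SAE_LEVELS = {
    0: ("Warnings only; no corrective action", "driver support"),
    1: ("A single assist feature, e.g. lane keeping", "driver support"),
    2: ("Multiple assist features active together", "driver support"),
    3: ("Self-driving in limited conditions; driver must take over", "automated driving"),
    4: ("Self-driving on known roads; no takeover required", "automated driving"),
    5: ("Self-driving anywhere, in any conditions", "automated driving"),
}

def category(level: int) -> str:
    """Return the SAE category ('driver support' or 'automated driving')."""
    return SAE_LEVELS[level][1]

print(category(2))  # driver support
print(category(3))  # automated driving
```

Note how the boundary falls between Levels 2 and 3: that single step is where responsibility for monitoring the road begins to shift from the human to the system, which is why marketing that blurs it is a problem.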

Do You Still Need to Pay Attention to the Road?

Yes. Always. Even when using driving assist technology in Levels 0 to 3, it’s required that you always keep your eyes on the road.

However, when using low-speed applications, including self-parking features, keeping your eyes on the road or staying inside the vehicle may not be needed. For example, some luxury brands offer a self-parking remote that handles this maneuver for things like parallel parking.

Which Cars Have Self-Driving Capability?

Virtually every automaker selling cars in the U.S. today offers driver-assistance systems that can reduce the workload on the driver. These include adaptive cruise control that can adjust speed to maintain distance from the car ahead or automatic emergency braking that can slow or stop the car to avoid hitting a vehicle or pedestrian or reduce the severity of a crash.

None of these systems are so reliable that the driver can take their attention from the task of driving, though.

Many manufacturers currently market systems up to and including Level 2 automation. This approach combines adaptive cruise control and lane-keeping assist into a system that requires that the driver keep their hands on the wheel but relieves some of the driver’s workload.

A prime example is cruise control with stop-and-go capability that allows the driver to negotiate heavy traffic without using the pedals.

The Future of Self-Driving Cars

Engineers from more than a dozen companies are testing self-driving systems in hopes of producing an SAE Level 5 self-driving car. It seems safe to predict that the technology is coming.

But the engineering challenge of getting there is immense. A car that can drive itself on well-maintained roads may make a critical mistake on poorly maintained ones. A car that can react safely to normal traffic may not react safely to unusual situations. A car that can do everything engineers ask of it may fail when presented with a problem they never considered: in one recent incident, a self-driving car in testing was baffled by a truck bed full of traffic signs being delivered to a construction site. The car had no idea what to do.

Beyond the engineering challenge, 50 sets of state laws (plus the District of Columbia) must adapt to decide safety and liability issues before self-driving cars can become common.

The market will also have its say. Volkswagen recently unveiled a concept car that would charge by the mile for self-driving capability. Executives reasoned that as long as getting your car to drive you somewhere costs less than a train ticket to that same place, they could charge for using the self-driving feature. So, while some automakers hope to charge buyers upfront for automation, others may make it available for short-term rental only.

Lastly, there’s the matter of marketing. It’s already growing difficult to sort what manufacturers claim their cars can do from what they can actually do. That will only grow cloudier as the technology advances.

So, while you may be able to own a self-driving car in your lifetime, it may be further away than advancing technology would indicate.

This article will likely be one of the more important ones in the dossier, as it provides a significant amount of information about the levels of autonomy currently assigned to vehicles and what each one means. With this understanding, we can determine the level of autonomy that our vehicle seat design is targeting and design for user needs more successfully; for example, a solution for a Level 2 vehicle could look drastically different from one for a Level 5, so we need to take this precisely defined terminology into consideration.

Additionally, this article raises several difficult questions about the future of autonomous vehicles and the problems that can arise when they aren't designed and engineered correctly. Understanding this will help us better grasp the feelings of the user and design a transitional space that could lead to a smoother acclimation experience as autonomous vehicles are released in the future.

Tucker, S. (2021, August 3). Self-Driving Cars: Everything You Need to Know. Retrieved September 22, 2021, from https://www.kbb.com/car-advice/self-driving-cars/.


Self-Driving Car Ethics

Earlier this spring 49-year-old Elaine Herzberg was walking her bike across the street in Tempe, Ariz., when she was hit and killed by a car traveling at over 40 miles an hour.

There was something unusual about this tragedy: The car that hit Herzberg was driving on its own. It was an autonomous car being tested by Uber.

It’s not the only car crash connected to autonomous vehicles (AVs) as of late. In May, a Tesla on “autopilot” mode accelerated briefly before hitting the back of a fire truck, injuring two people.

The accidents unearthed debates that have long been simmering around the ethics of self-driving cars. Is this technology really safer than human drivers? How do we keep people safe while this technology is being developed and tested? In the event of a crash, who is responsible: the developers who create faulty software, the human in the driver’s seat who fails to recognize the system failure, or one of the hundreds of other hands that touched the technology along the way?

The need for driving innovation is clear: Motor vehicle deaths topped 40,000 in 2017 according to the National Safety Council. A recent study by RAND Corporation estimates that putting AVs on the road once the technology is just 10 percent better than human drivers could save thousands of lives. Industry leaders continue to push ahead with development of AVs: Over $80 billion has been invested so far in AV technology, the Brookings Institution estimated. Top automotive, rideshare and technology companies including Uber, Lyft, Tesla, and GM have self-driving car projects in the works. GM has plans to release a vehicle that does not need a human driver--and won't even have pedals or a steering wheel--by 2019.
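As a rough illustration of the RAND claim, consider a crude back-of-envelope calculation. This is not RAND's actual model, which simulates adoption rates and fleet turnover over decades; it simply scales the NSC's annual death toll by an assumed safety margin:

```python
# Crude back-of-envelope sketch, NOT RAND's model: assume every mile is
# driven by AVs that are 10 percent safer than human drivers, so annual
# deaths fall proportionally from the NSC's 2017 figure cited above.
annual_deaths_human = 40_000   # US motor vehicle deaths, 2017 (NSC)
safety_improvement = 0.10      # assumption: AVs 10% better than humans

lives_saved = annual_deaths_human * safety_improvement
print(f"{lives_saved:.0f} lives saved per year")  # prints "4000 lives saved per year"
```

Even this oversimplified version shows why researchers argue against waiting for near-perfect technology: a modest safety edge, applied at scale, adds up quickly.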

But as the above crashes indicate, there are questions to be answered before the potential of this technology is fully realized.

Ethics in the programming process

Accidents involving self-driving cars are usually due to sensor error or software error, explains Srikanth Saripalli, associate professor in mechanical engineering at Texas A&M University, in The Conversation. The first issue is a technical one: Light Detection and Ranging (LIDAR) sensors won't detect obstacles in fog, cameras need the right light, and radars aren't always accurate. Sensor technology continues to develop, but there is still significant work needed for self-driving cars to drive safely in icy, snowy and other adverse conditions. Inaccurate sensors can cause errors in the system that likely wouldn't trip up human drivers. In the case of Uber's accident, the sensors identified Herzberg (who was walking her bike) as a pedestrian, a vehicle and finally a bike "with varying expectations of future travel path," according to a National Transportation Safety Board (NTSB) preliminary report on the incident. The confusion caused a deadly delay--it was only 1.3 seconds before impact that the software indicated that emergency braking was needed.

Self-driving cars are programmed to be rule-followers, explained Saripalli, but the realities of the road are usually a bit more blurred. In a 2017 accident in Tempe, Ariz., for example, a human-driven car attempted to turn left through three lanes of traffic and collided with a self-driving Uber. While there isn’t anything inherently unsafe about proceeding through a green light, a human driver might have expected there to be left-turning vehicles and slowed down before the intersection, Saripalli pointed out. “Before autonomous vehicles can really hit the road, they need to be programmed with instructions about how to behave when other vehicles do something out of the ordinary,” he writes.

However, in both the Uber accident that killed Herzberg and the Tesla collision mentioned above, there was a person behind the wheel of the car who wasn’t monitoring the road until it was too late. Even though both companies require that drivers keep their hands on the wheel and eyes on the road in case of a system error, this is a reminder that humans are prone to mistakes, accidents and distractions--even when testing self-driving cars. Can we trust humans to be reliable backup drivers when something goes wrong?

Further, can we trust that companies will be thoughtful--and ethical--about the expectations for backup drivers in the race to accumulate test miles? Backup drivers who worked for Uber told CityLab that they worked eight- to ten-hour shifts with a 30-minute lunch and were often pressured to forgo breaks. Staying alert and focused for that amount of time is already challenging. With the false security of self-driving technology, it can be tempting to take a quick mental break while on the road. "Uber is essentially asking this operator to do what a robot would do. A robot can run loops and not get fatigued. But humans don't do that," an operator told CityLab.

The limits of the trolley scenario

Despite the questions that these accidents raise about the development process, the ethics conversation up to this point has largely been focused on the moment of impact. Consider the "trolley problem," a hypothetical ethical brain teaser frequently brought up in the debate over self-driving cars. If an AV is faced with an inevitable fatal crash, whose life should it save? Should it prioritize the pedestrian? The passenger? Saving the most lives? Saving the young or the elderly?

Ethical questions abound in every engineering and design decision, engineering researchers Tobias Holstein, Gordana Dodig-Crnkovic and Patrizio Pelliccione argue in their recent paper, Ethical and Social Aspects of Self-Driving Cars, ranging from software security (can the car be hacked?) to privacy (what happens to the data collected by the car sensors?) to quality assurance (how often does a car like this need maintenance checks?). Furthermore, the researchers note that some ethics are directly at odds with the private industry's financial incentives: Should a car manufacturer be allowed to sell cheaper cars outfitted with cheaper sensors? Could a customer choose to pay more for a feature that lets them influence the decision-making of the vehicle in fatal situations? How transparent should the technology be, and how will that be balanced with intellectual property that is vital to a competitive advantage?

The future impact of this technology hinges on these complex and bureaucratic "mundane ethics," points out Johannes Himmelreich, interdisciplinary ethics fellow at Stanford University, in The Conversation. We need to recognize that big moral quandaries don't just happen five seconds before the point of impact, he writes. Programmers could choose to optimize acceleration and braking to reduce emissions or improve traffic flow. But even these decisions pose big questions for the future of society: Will we prioritize safety or mobility? Efficiency or environmental concerns?

Ethics and responsibility

Lawmakers have already begun making these decisions. State governments and municipalities have scrambled to play host to the first self-driving car tests, in hopes of attracting lucrative tech companies, jobs and an innovation-friendly reputation. Arizona governor Doug Ducey has been one of the most vocal proponents, welcoming Uber when the company was kicked out of San Francisco for testing without a permit.

Currently there is a patchwork of laws and executive orders at the state level that regulate self-driving cars. Varying laws make testing and the eventual widespread roll-out more complicated, and it is likely that self-driving cars will need a completely unique set of safety regulations. Outside the US, there has been more concrete discussion. Last summer Germany adopted the world's first ethical guidelines for driverless cars. The rules state that human lives must take priority over damage to property, and that in the case of an unavoidable human accident, a decision cannot be made based on "age, gender, physical or mental constitution," among other stipulations.

There has also been discussion as to whether consumers should have the ultimate choice over AV ethics. Last fall, researchers at the European University Institute suggested the implementation of an "ethical knob," as they call it, in which the consumer would set the software's ethical decision-making to altruistic (preference for third parties), impartial (equal importance to all parties) or egoistic (preference for all passengers in the vehicle) in the case of an unavoidable accident. While their approach certainly still poses problems (a road in which every vehicle prioritizes the safety of its own passengers could create more risk), it does reflect public opinion. In a series of surveys, researchers found that people believe in utilitarian ethics when it comes to self-driving cars--AVs should minimize casualties in the case of an unavoidable accident--but wouldn't be keen on riding in a car that would potentially value the lives of multiple others over their own.
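The "ethical knob" proposal can be pictured with a small sketch. Everything here (the class names, the numeric weights, the cost function) is invented for illustration; the original paper proposes the three settings but no such API:

```python
# Hypothetical illustration of the "ethical knob": the passenger picks a
# setting, and a planner weights passenger harm vs. third-party harm
# accordingly. Weights and function names are invented for this sketch.
from enum import Enum

class KnobSetting(Enum):
    ALTRUISTIC = "altruistic"   # preference for third parties
    IMPARTIAL = "impartial"     # equal importance to all parties
    EGOISTIC = "egoistic"       # preference for the vehicle's passengers

# Illustrative relative weight applied to harm to the car's own passengers
PASSENGER_WEIGHT = {
    KnobSetting.ALTRUISTIC: 0.5,
    KnobSetting.IMPARTIAL: 1.0,
    KnobSetting.EGOISTIC: 2.0,
}

def expected_cost(setting: KnobSetting,
                  passenger_harm: float,
                  third_party_harm: float) -> float:
    """Score an outcome; the planner prefers the lower-cost outcome."""
    return PASSENGER_WEIGHT[setting] * passenger_harm + third_party_harm

# The egoistic setting penalizes passenger harm twice as heavily as the
# impartial one, so the planner avoids outcomes that hurt the passenger:
print(expected_cost(KnobSetting.EGOISTIC, 1.0, 0.0))   # 2.0
print(expected_cost(KnobSetting.IMPARTIAL, 1.0, 0.0))  # 1.0
```

The sketch also makes the researchers' caveat concrete: if every vehicle on a road runs the egoistic setting, each one systematically shifts risk onto everyone else.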

This dilemma sums up the ethical challenges ahead as self-driving technology is tested, developed and increasingly drives next to us on the roads. The public wants safety for the most people possible, but not if it means sacrificing one's own safety or the safety of loved ones. If people are to put their lives in the hands of sensors and software, thoughtful ethical decisions will need to be made to ensure a death like Herzberg's isn't inevitable on the journey to safer roads.

Karis Hustad is a Denmark-based freelance journalist covering technology, business, gender, politics and Northern Europe. She previously reported for The Christian Science Monitor and Chicago Inno. Follow her on Twitter @karishustad and see more of her work at karishustad.com.


Copyright © Center for Digital Ethics & Policy 2010-2017.


ESSAY SAUCE


Essay: Self-driving cars

Essay details:

  • Subject area(s): Engineering essays
  • Published: 4 February 2019
  • Words: 2,979 (approx.)
  • Tags: Artificial intelligence essays

Self-driving cars. Ten years ago, something like this would have seemed completely unbelievable. Now, however, it's becoming our reality. A lot of questions come to mind with this subject: how will this affect our lives? How will this affect our future? Is this good for us? Is this bad? These questions must be strongly considered and answered when looking at this subject. I think that this is a promising new field, but one that raises many serious questions and even involves the entire concept of comparing artificial intelligence to human intelligence. I think that while this type of vehicle has promise, it's very hard to choose artificial intelligence over human intelligence. A lot of things must be figured out before this type of car can seriously challenge people's preference for cars that they drive themselves.

What are self-driving cars? Well, as is clear from the name itself, these are cars that don't need a driver. These cars use an artificial intelligence system to decide everything that is typically decided by the driver. A person can presumably input the destination, and the car will do the rest on its own. In other words, it's a form of taxicab owned by the passenger. There is, however, much more to this than just that: these cars also make decisions normally made by human drivers, such as choosing the best routes and even calculating how to cause the fewest casualties in an accident. This is a very controversial subject, as these cars may prefer that the owner/passenger die instead of others if it causes the fewest overall casualties. It's not yet clear what all these cars will be able to do, but the general basics are clear: you sit back and let the car do the rest. An interesting question that isn't often asked is whether a person will need a driver's license to be in this car. If the car does all the work, then why does a person need to know how to drive? Will there be an option for the person to drive the car themselves if they choose to? Unfortunately, these are all currently unanswered questions; what we do know for a fact is that these are cars that drive the person themselves.

The idea that cars should drive themselves is as old as cars themselves. Putting this idea into motion, however, was only possible in modern times. The earliest prototype of such a car was the 1925 radio-controlled vehicle demonstrated by Houdina Radio Control, built on a Chandler chassis. This idea was also promoted by General Motors in the 1930s and shown off at the 1939 New York World's Fair. It was even predicted that these cars would be common in the US by the 1960s. In 1953, RCA Labs built a miniature prototype of such a car, again promoting this as a serious future option for consumers. The common issue with all these designs, however, was that none of them were practical vehicles that people could buy or trust to work properly. These were all ideas and predictions, not practical working concepts. General Motors went a step further and created a series of concept cars called "Firebirds" that were supposed to lead to self-driven cars on the market by 1975. This became a popular topic in the media and led many interested journalists and reporters to be allowed to test drive these cars. The excitement was there, but the cars still could not be put on the market.

The 1960s saw Ohio State University and the Bureau of Public Roads continue the pursuit of putting this type of car on the market. The attempts, however, were again hard to get off the ground, and simple prototypes were the only thing that could be completed. Great Britain's Transport and Road Research Laboratory was next to try and fail at this idea. In this version, magnetic cables were embedded in the roads, and a modified Citroën DS interacted with them to move the car along. In the 1970s, the Bendix Corporation worked with Stanford University on a concept involving cables buried in the ground that helped move cars on the road. I think it is obvious why this didn't work out in the end either. It is important to mention that funding was a major problem for many of these ideas. As can easily be assumed, none of these features could possibly be delivered at affordable rates, and they required large amounts of labor and large changes to the roads to accommodate them.

The Germans entered this field in the 1980s. Mercedes-Benz launched its own version of such a car, but it could not move faster than 39 miles per hour, clearly far below the speed of an average car. Multiple American universities were next. The Universities of Maryland and Michigan created prototypes that were able to travel on hard terrain at different speeds, but again not very fast. The ability to make these cars fast was yet another problem faced by the developers. In 1991, the United States Congress passed the ISTEA transportation authorization bill, which pushed for the creation of an automated transport system by 1997. By the late 1990s, the University of Parma in Italy and Daimler-Benz were able to create vehicles that could reach 81 mph. The issues of funding and efficient mass production, however, continued to plague these advancements. The 2000s saw even more progress, as Germany developed the "Spirit of Berlin" taxicab and the Netherlands introduced the ParkShuttle. Neither of these options could fully replace human-driven transportation services, but they managed to be effective means of transportation regardless. By the end of the decade, most of the major car companies were working on self-driven cars; Mercedes-Benz, Audi, Tesla and Toyota are some of the more notable companies that were working on prototypes. Uber and Lyft began developing self-driven taxicabs in recent years to save money on drivers and make their businesses run more smoothly and efficiently. In 2018, a woman was killed by an automated Uber vehicle, and Audi officially announced the release of a mass-produced line of self-driven cars.

What can be predicted about the future of this technology? Logically, we can assume that with the current state of technology, better cars will be released and practical self-driven cars will be readily available to the public. Will this idea take off with the public? That is the harder question. There is really nothing that can be seriously predicted about how the public will react to this. Personally, I think it will be decades before people are ready to replace cars they drive themselves with self-driven cars. Why do I think so? I think that many people love driving and would not want to let "somebody else" do it for them. It's also reasonable to assume that taxicabs may be less expensive than buying a self-driven car. There is also the issue of cost. How much will these cars cost? Will the average driver be able to afford one? Will it be popular among the general population? There are too many unanswered and hard-to-answer questions about the topic. I do have one concern that comes to mind: the Industrial Revolution threatened entire industries, as many people lost jobs to machines. How many taxi drivers would be needed with self-driven cars in the equation? How many bus drivers would be required?

What kind of impact will such cars have on the general population? How will this affect hardware? How will this affect future software? How will it affect data? I think that this concept becoming more popular will lead to increased funding for the development of new hardware and software pertaining to self-driven cars. It will also likely lead to new ideas in other areas. What about self-working computers? What about self-working irons and laundry machines/robots? There are a lot of concepts that can be imagined by thinking of self-operating hardware and software, and I think this will lead to major development of those concepts. It will also lead to major advancements in software in general, as well as artificial intelligence. If companies can successfully build artificial intelligence systems that will drive cars by themselves, a lot of other things can be made self-controlled as well. One thing that I think could be done successfully is computers that can do things for you, for example your taxes or other accounting-related tasks. I can even imagine self-driven planes and boats. Basically, there are a lot of advancements that can be made through self-working technology. It's possible that driving a car will become less of a priority for people, and getting a driver's license might become more of a novelty than a necessity. I also think that NASCAR and the popularity of racing could be affected by the popularity of self-driving cars, as could the whole culture of driving. The main question for me is the cost of these cars. The affordability, or lack of it, will be a major reason why this business concept will or will not work. I have my doubts on the topic, as I think that many people enjoy driving and would not want to give it up. There is also a wide variety of taxicab services that are cheaper alternatives to owning a self-driving car. I'm also unclear on whether sports cars could be self-driven as well. The latter is important because of the popularity of such cars.

How will this technology change the way business is conducted? The main thing that comes to mind is whether a driver's license will still be needed when purchasing a car. Is it possible that there would be no age limit to buy a car, either? I think that businesses would also come up with new marketing strategies to sell these cars, since driving would no longer be an important part of the marketing pitch. It would also be a potential issue for Uber and Lyft, as well as other taxicab and car service companies. It might even affect limo companies, as wealthy people might prefer very expensive self-driven cars. The big thing that comes to mind is that driving would no longer be an important component of owning a car. I think it's common sense that any change in business would lead to companies adjusting their strategies and marketing campaigns and focusing on different areas to promote their ideas. It could also affect other businesses entirely, as they would focus more on self-working concepts and products. As I mentioned earlier, a laundromat could use some type of laundry machine/robot that would do laundry for you. Phone companies could come up with cell phones that work automatically in some way and offer phone plans for self-driving cars. Why? Well, it would no longer be illegal to talk on the phone in your car; why would it be, when it wouldn't distract you from driving? What about television screens in cars? The owner/passenger now has free time; doesn't that seem to be a new business opportunity for companies like Netflix? I think that these would be the main things affected by self-driving cars and similar technology. Every new invention that changes the way people normally do things is bound to change the marketplace and affect the way that companies handle their business expenditures.

How would self-driving cars affect competition between companies? There would be no reason for companies to focus on driving as a major part of their selling pitches. Commercials would no longer advertise the handling and driving of cars, as the person would not actually be driving. Companies would compete to have technology in which the person has to do the least to make it work, and would try to gain a competitive advantage against one another by adding features that make the product do as much as possible by itself. I can imagine cars that incorporate other technology: can you imagine cars that do accounting for you? What about a competing company that makes a car that can call companies and have conversations for you? What about cars that make decisions for you? What about cars that act as your secretary while driving you? There is an almost limitless number of possibilities for a company aiming to stand out. This type of technology can of course be applied to other technologies as well, so now we're talking about cell phones that call for you, cell phones that make decisions for you. Basically, companies would take this technology to the extreme to compete. The spirit of competition has driven many industries to unprecedented highs, and this industry will likely be no exception to the rule. The question would ultimately be which companies stay ahead of the curve and which do not.

How do self-driving cars affect society in a global way? Well, if this concept takes off, then countries will try to keep up with each other by improving on the technology and attempting to avoid falling "behind" others. It will be a major driving factor in the competition between major companies and create new forms of advancement in other technologies. The global impact of such a technology is enormous and would change a lot of things as we know them. It's certainly not going to be an isolated idea that affects only one country and one field; it will affect the whole world and multiple industries, including those that have nothing to do with the automobile industry.

Is there an ethical side to self-driving cars? A major question that comes to mind is whether it is a good idea to place so much trust in artificial intelligence. What would happen if someone who doesn't know how to drive is faced with a malfunctioning vehicle? What happens if these vehicles cause a multitude of accidents? Is it a good idea for our society to become more "lazy"? Should we really try to have something else do as much of our work as possible? This is an issue that can be debated ad nauseam without a generally agreed-upon answer or solution. Personally, I think that giving so much authority to machines is dangerous; how long before we start putting machines in leadership positions and becoming completely incompetent without them? We rely on the internet, cell phones, cars and social media daily; how would many of us survive if all these options were taken away? Why do we need a car to drive itself? Why can't a person do it themselves? Why is this improvement even needed? There seems to be an endless supply of questions on this subject. Personally, I think my position on the subject has been made clear. I don't think that self-driven cars are as much of a necessity as they seem to be, and the current state of transportation is a better and more efficient way of doing things.

What are the legal repercussions of self-driving cars? What happens if the car owner gets into an accident? Is the person responsible, or is the car? If it is the car, what happens next? Obviously, nobody will arrest a car, so does that mean no one is at fault if their car runs someone over? How do we define right and wrong when it comes to artificial intelligence? Will any of these cars be controllable by both people and artificial intelligence? In that case, could someone run another person over and then blame it on the artificial intelligence? How would law enforcement be able to prove what happened? Would the company itself be responsible? Once again, we enter a new reality filled with many different possibilities and in need of new rules to govern them. It seems clear to me that self-driving cars will require a whole new set of laws to adjudicate the accidents that will almost certainly happen, regardless of whether the driver is human or not.

As self-sustaining technology advances, so does the general concern I stated earlier. Driverless cars could either be a technology that benefits the population or one that is detrimental to society. Based on the information I found and my own opinion of these cars, my view is that they would be detrimental, specifically because of the life-or-death calculations the artificial intelligence can make. For example, to avoid multiple casualties, the AI may calculate that putting your life on the line is the correct course of action. I would argue that this is something the AI should never be allowed to decide, chiefly because it cannot use emotional intuition to make choices involving life or death. All things considered, we have come a long way with our technology, and so has the concept of cars that drive themselves. Our society is bound to be affected by a step of this magnitude, but many factors must be taken into consideration to make a true judgment on the matter. Self-driving cars will either change driving as we know it or become a failed attempt to fix something that did not need fixing.


Source: Essay Sauce, "Self-driving cars". Available from: https://www.essaysauce.com/engineering-essays/self-driving-cars/ [Accessed 07-04-24].



COMMENTS

  1. PDF Self-Driving Car Autonomous System Overview

    Self-Driving Car Autonomous System Overview - Industrial Electronics Engineering - Bachelors' Thesis - Author: Daniel Casado Herráez Thesis Director: Javier Díaz Dorronsoro, PhD Thesis Supervisor: Andoni Medina, MSc San Sebastián - Donostia, June 2020 .

  2. Self-driving Cars: The technology, risks and possibilities

    Essentially, a self-driving car needs to perform three actions to be able to replace a human driver: to perceive, to think and to act (Figure 1). These tasks are made possible by a network of high-tech devices such as cameras, computers and controllers. Figure 1: Like a human driver, a self-driving car executes a cycle of perceiving the ...

  3. PDF AUTONOMOUS VEHICLES

    Stanford professor and head of Google's self-driving car program, wrote: "I envision a future in which our technology is available to everyone, in every car. I envision a future without traffic accidents or congestion."7 Self-driving cars or autonomous vehicles (AVs) use sensors, computers and robotic actuators to

  4. PDF Self-Driving Cars and the Value of Human Life A Thesis Submitted to the

    Important in understanding the ethical issues raised by self-driving cars is first understanding how self-driving cars work. To begin, "autonomous" systems make impor-tant choices about their own actions with little or no human intervention.4 Much in the way an automatic vacuum such as a Roomba cleans on its own, avoiding obstacles in its

  5. Artificial Intelligence in Self-Driving Cars Research and ...

    Abstract. This paper presents a scientometric and bibliometric analysis of research and innovation on self-driving cars. Through an examination of quantitative empirical evidence, we explore the importance of Artificial Intelligence (AI) as machine learning, deep learning and data mining on self-driving car research and development as measured by patents and papers.

  6. (PDF) Self-Driving Vehicles—an Ethical Overview

    Solving the single-vehicle self-driving car trolley problem using risk theory and vehi-cle dynamics. Science and Engineering Ethics, 26, 431-449. de Jong, R. (2020). The retribution-gap and ...

  7. Autonomous vehicles: theoretical and practical challenges

    With self-driving cars, EVs will definitively capture the market. Furthermore, well-developed management strategies including AVs can increase traffic efficiency and thus help to further reduce pollutant emissions (Ding and Rakha, 2002; Soriguera et al. 2017) and energy consumption (Wadud et al. 2016). In fact, automation is already boosting ...

  8. (PDF) Self-driving cars

    PDF | On Nov 1, 2017, Peter Szikora and others published Self-driving cars — The human side | Find, read and cite all the research you need on ResearchGate

  9. (PDF) Decision Making for Autonomous Car Driving using Deep

    Self- driving cars are one of the most prominent research domain in today's worlds. From training cars how to park to run partially autonomously on the road is a long journey. ... This thesis will ...

  10. Ethical Considerations Facing the Regulation of Self-Driving Cars in

    Self-Driving Cars in the United States Richard Mancuso Claremont McKenna College This Open Access Senior Thesis is brought to you by Scholarship@Claremont. It has been accepted for inclusion in this collection by an authorized administrator. For more information, please [email protected]. Recommended Citation

  11. Values of trust in AI in autonomous driving vehicles

    all self-driving cars used in public transportation must have a human driver, known as a safe driver (Thorpe, Herbert, Kanade, & Shafter, 1991). It can be seen from this, whether from the government's point of view or the public's point of view, there is still some way to go in giving the task of driving to AI with complete confidence and trust.

  12. PDF Title: The development of autonomous vehicles

    Figure 3.7; Tesla Self Driving Car Demo Video Analyzed 29 Figure 3.8: How cars learn 31 Figure 4.1: A-U model of product and process innovation 42 Figure 4.2: Roger's diffusion of innovation curve 46 Figure 4.3: Four stages in the bonding continuum 52 Figure 5.1: Industry BM shifts Figure 5.2: The Four Stages of Mobility 60

  13. Design of a Control System for an Autonomous Vehicle Based on Adaptive

    In this paper we describe the intelligent control system designed for an autonomous vehicle in this challenge. We first introduce briefly the system architecture used by Intelligent Pioneer, then describe the control algorithm used to generate every move of the vehicle based on the vehicle's lateral dynamics and adaptive PID control.

  14. From self-driving cars to smart homes: a thesis explores the ...

    From self-driving cars to smart homes: a thesis explores the future of interconnected machines. The paper analyses the challenges to be met by Internet of Things devices and applicationsDevice interconnection is expanding and shows connectivity and operational capacity limitations.

  15. Self-Driving Car Ethics

    Self-driving cars are programmed to be rule-followers, explained Saripalli, but the realities of the road are usually a bit more blurred. In a 2017 accident in Tempe, Ariz., for example, a human-driven car attempted to turn left through three lanes of traffic and collided with a self-driving Uber. While there isn't anything inherently unsafe ...

  16. Semantic segmentation for self-driving cars using deep learning: a

    When speaking about self-driving systems, road scene understanding is a priority in order to make the right decision every moment. For a scene understanding mission to complete, a self-driving car has to know the segment label under which each pixel of the received image signal is classified. This problem is known as "semantic segmentation.

  17. Seven Arguments Against the Autonomous-Vehicle Utopia

    But in a duo of essays in 2017, ... The transportation reporter and self-driving car skeptic Christian Wolmar once asked a self-driving-car security specialist named Tim Mackey to lay out the problem.

  18. Self-Driving Cars: Everything You Need to Know

    Beyond the engineering challenge, 50 sets of state laws (plus the District of Columbia) must adapt to decide safety and liability issues before self-driving cars can become common. The market will also have its say. Volkswagen recently unveiled a concept car that would charge by the mile for self-driving capability.

  19. Self-Driving Car Ethics

    Self-driving cars are programmed to be rule-followers, explained Saripalli, but the realities of the road are usually a bit more blurred. In a 2017 accident in Tempe, Ariz., for example, a human-driven car attempted to turn left through three lanes of traffic and collided with a self-driving Uber. While there isn't anything inherently unsafe ...

  20. Self-driving Cars Argumentative Essay

    The study found that self-driving cars could reduce the number of vehicles on the road by up to 2.5 million. This would free up an estimated 1.9 million hours of travel time and reduce fuel consumption by up to 1.5 billion gallons. In addition, self-driving cars would reduce the number of accidents by up to 80%.

  21. Annotated bibliography Self-driving Cars

    Thesis Statement: Self-driving cars are safer than manually driven cars. Annotated bibliography "Global Driverless/SelfDriving Car Market to Witness Rapid Growth till 2025, Reveals New Market Research Report." M2 Presswire 14 Feb. 2017: n. pag. Print.

  22. Self Driving Thesis

    Self Driving Thesis. The increasing use of computer technology in transportation will provide us with much greater levels of safety and reliability--saving potentially millions of lives, and make personal travel and organizational shipping more cost effective. The fear of placing our faith in the "hands" of automated transportation is unfounded.

  23. Self-driving cars

    Text preview of this essay: This page of the essay has 2,979 words. Download the full version above. Self-driving cars. Ten years ago, something like this would seem completely unbelievable. Now however, it's becoming our reality. A lot of questions come to mind with this subject: how will this affect our lives?