Automated Vehicles: Strict Products Liability, Negligence Liability and Proliferation

By: Steven Wittenberg

The proliferation of automated vehicles (sometimes called “self-driving cars”[1] or “autonomous cars”[2]) is poised to make American roads safer by reducing or even eliminating human error, which is the leading cause of collisions. In 2008, the National Highway Traffic Safety Administration (NHTSA) reported that 40 percent of crashes occur because of “recognition error,” which includes “inadequate surveillance” and “internal distraction,” while 35 percent of crashes arise from “decision error,” which includes speeding and misjudgments.[3] Automated vehicles can increase safety by removing human error from the equation.[4]

California, Nevada, Michigan, Florida, and the District of Columbia are the only jurisdictions that have enacted legislation regulating automated vehicles on public roads.[5] Virginia has dedicated 70 miles of highway for public road testing.[6] To provide some background, the California statute requires drivers of automated vehicles to obtain a special license.[7] Additionally, the vehicles must have a way to “disengage the autonomous technology that is easily accessible to the operator.”[8] If the technology fails, the driver must take control or the car will initiate a complete stop.[9] Further, each vehicle must record “autonomous technology sensor data” for the thirty seconds preceding a collision, and that data must be retained for three years.[10] The statute also requires five million dollars of insurance for those conducting public road testing of automated vehicles.[11] Moreover, the statute gives NHTSA regulations superseding authority over California state provisions.[12]
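The statute's thirty-second recording requirement implies a rolling window of sensor data that is continuously overwritten until a collision freezes it. The sketch below is purely illustrative and not drawn from any manufacturer's implementation; the class name, sampling rate, and sample format are all hypothetical:

```python
from collections import deque

SAMPLE_RATE_HZ = 10   # hypothetical sensor sampling rate
WINDOW_SECONDS = 30   # the statute's thirty-second pre-collision window

class SensorRecorder:
    """Keeps only the most recent thirty seconds of samples in memory."""

    def __init__(self):
        # deque with maxlen silently discards the oldest sample on overflow
        self.buffer = deque(maxlen=SAMPLE_RATE_HZ * WINDOW_SECONDS)

    def record(self, sample):
        self.buffer.append(sample)

    def snapshot_on_collision(self):
        # Freeze the window; the statute then requires three-year retention.
        return list(self.buffer)

recorder = SensorRecorder()
for t in range(1000):                 # simulate 100 seconds of driving at 10 Hz
    recorder.record({"t": t, "speed_mph": 25.0})

saved = recorder.snapshot_on_collision()
print(len(saved))        # 300 samples = 30 seconds at 10 Hz
print(saved[0]["t"])     # 700: only the final 30 seconds survive
```

The fixed-size buffer also illustrates why the provision aids accident reconstruction rather than prevention, a distinction the negligence per se discussion below turns on.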

Undeniably, defective autonomous technology will cause accidents for users of automated vehicles, which raises the question of who should be held liable.[13] The legal framework for accident liability is bifurcated into strict products liability and negligence theory.[14] Strict products liability may place fault solely on the manufacturer and will lead to increased consumer cost, but will produce higher demand. It benefits plaintiffs because the burden of proof is relatively relaxed. Negligence liability, on the other hand, will create a more refined system of comparative fault and a cheaper price per unit for consumers, but it may unduly deter prospective buyers of the nascent technology. The proof required for negligence is much greater than for strict products liability. Both theories of recovery will run into causation issues because other drivers will often be the ones crashing into automated vehicles through their own fault; in such cases, the automated technology will not be the cause in fact of the accident. In the beginning, it may be necessary for manufacturers to promise they will assume liability for accidents that arise during autonomous mode, regardless of who caused the accident, to achieve strong initial product growth. Strict products liability’s ability to assuage consumers’ fear of liability outweighs the likely modest benefit of the reduced cost per unit gained under negligence theory. Therefore, the best legal theory for recovery is strict products liability because it assures risk-averse consumers they will not be held liable when defective technology causes an accident, which will ultimately increase road safety as more automated vehicles hit the road.


Strict products liability is the most efficient way to allocate liability for potential collisions caused by automated vehicles. Under this theory, there need not be any blameworthy state of mind or negligence on behalf of the manufacturer for liability to attach.[15] Rather, strict products liability merely requires that (1) the product was defective when it left the automated vehicle manufacturer’s control, (2) it was unreasonably dangerous, and (3) the defect was the actual and proximate cause of the injuries.[16] Potential cases of unreasonably dangerous defects by the manufacturer could include accidentally shipping prototype software instead of “market-ready version[s]”[17] and failed manual override mechanisms, which may prevent the driver from taking over the steering wheel or using the brakes. In addition, the technology may be too cautious and could lead to accidents by failing to take necessary risks to avert harmful contact with other vehicles or obstacles (i.e. evasive maneuvering).[18]

The policy implications of a strict products liability regime for accidents arising from automated vehicle defects are mixed. It would benefit consumers because it would pressure manufacturers of automated vehicles to sell fewer defective cars. Moreover, it would enable courts to resolve conflicts with relatively little administrative cost because there does not need to be any evidence of misconduct, as “[t]he production and marketing of a defective product” is, itself, the evil act.[19] The legal cost for plaintiffs would also be relatively low because the prima facie case is easier to satisfy and therefore requires less attorney time. Additionally, the discovery process for evidence would be more straightforward and would involve fewer countermotions. Further, malfunction theory affords the inference that a defect exists, provided there are no other possible causes or evidence of abnormal use.[20] In terms of principle, caveat emptor (“let the buyer beware”) is obsolete in an age of high technology and industry, especially for products claiming to be fully autonomous like the automated vehicle. It is argued the cost should be absorbed by the manufacturers because they are in the best position to avoid defective products.[21]

On the other hand, in instances of comparative fault among multiple actors, strict products liability is less flexible because there are only three affirmative defenses for the manufacturer, which are the plaintiff’s (1) misuse, (2) unreasonable assumption of risk, and (3) unreasonable failure to discover or foresee dangers.[22] Furthermore, the increased cost of ensuring vehicle safety might be passed on to consumers, which may bring about excessive deterrence, although there is little precedent for safety features increasing cost.[23] However, the safety features involved with autonomous technology are high-tech software and digital hardware, not simple seatbelts and airbags, and are plausibly more costly to produce. Likely, however, any increased cost will not stifle the development of automated cars because there has already been significant investment in the product. Moreover, manufacturers are already offering to compensate for damages caused by defective technology.[24]


An alternative to strict products liability recovery is to categorize driving an automated vehicle as an “abnormally dangerous activit[y]” (ADA).[25] This alternative would place liability with the driver of the automated vehicle for choosing to pursue the activity. ADA liability takes into consideration the following factors: (1) the risk of great harm; (2) the “inability to eliminate the risk by exercise of reasonable care;”[26] (3) the uncommonness of the activity; (4) the unsuitability for the locale; and (5) the social value of the activity.[27] The best potential cases of ADA liability could involve driving an automated vehicle in a location the manufacturer did not anticipate the driver would traverse, or perhaps in a dangerous environment where the manufacturer instructed the driver not to travel (e.g. during severe weather). Both scenarios could subject the driver and others to high risks of great harm. However, both scenarios fail the second factor of the ADA test because the risks could be eliminated through the exercise of reasonable care, namely by not driving in those dangerous settings.

The activity of driving an automated vehicle is not an ADA. First, although driving an automated vehicle may carry a higher risk of harm than driving a normal vehicle, the probability of harm is likely not high enough to support ADA liability.[28] Further, on public roads and in places where autonomous vehicles are likely to be found, the harm risked is not of great magnitude. For example, automated vehicles have been shown to produce less severe injuries than ordinary vehicles.[29] Second, these accidents will very likely be caused by other drivers through no fault of the automated technology.[30] Drivers of automated vehicles therefore cannot eliminate the risk through careful operation if other drivers are the cause of their accidents, so the second factor is satisfied.[31] Third, as the product is new, the activity of driving an automated vehicle is uncommon. However, the activity of driving a vehicle generally is not uncommon, and the risks associated with driving an automated vehicle will likely not differ greatly from those of driving a conventional vehicle; thus, driving an automated vehicle is likely not uncommon. Fourth, driving automated vehicles on public roads suits the locale because “the only place where the activity can be carried on must necessarily be regarded as the appropriate one.”[32] Fifth, the social value to the community is high because the states that have allowed automated vehicles benefit from jobs, tax revenue, and prestige as leaders in technology.[33] In sum, the activity of driving an automated vehicle is not an ADA because there is no high risk of great harm, it suits the locale, it is relatively common, and it conveys sufficient social value, although it satisfies the second factor that the risk cannot be eliminated through reasonable care.


Negligence theory, albeit less efficient than strict products liability recovery due to heightened burden of proof requirements, benefits from a fairer distribution of liability between blameworthy parties. Negligence is established when there is (1) an act or failure to act that falls below the standard of due care (i.e. a breach), which (2) actually and proximately causes an injury to an individual to whom (3) a duty is owed.[34] In the context of automated vehicles, manufacturers owe a duty to use reasonable care in the design of their automated vehicles to avoid unreasonable risks of injury and to minimize injuries in the event of an accident.[35] Moreover, there is a duty to build cars without “latent or hidden defects,”[36] which would include defective automated technology. An example of a breach of that duty would be simply failing to warn of or make safe defects or hazards in the automated vehicle. Actual causation requires the defect to be the cause in fact of the accident, while proximate causation limits injuries to “those physical harms that result from the risks that made the actor’s conduct tortious.”[37] Negligence liability may be established as negligence per se or as evidence of negligence if a statute or regulation is violated.[38] For negligence per se to be used, the statute or regulation must be (1) intended to protect a specific class of plaintiffs of which the plaintiff is a relevant member and (2) designed to prevent the type of injuries that the plaintiff sustained.[39] For example, software defects that prevent collecting sensor data thirty seconds before collisions violate the California statute.[40] However, this specific provision is not designed to prevent collisions; rather, it is designed to ensure data is recorded to determine how the accident unfolded and to prevent future accidents. Therefore, the sensor data collection provision cannot be used for negligence per se recovery for collisions.

The doctrine of comparative negligence allows a fairer distribution of fault between causal actors. Shares of responsibility are assigned in percentages to those with legal responsibility, including the negligent driver of the automated vehicle.[41] To illustrate, before an accident occurs, the driver of an automated vehicle might be negligent by failing to carefully watch the road, failing to take control of the steering wheel, or failing to apply the brakes. Alternatively, he or she might fail to perceive a warning that the automated technology is currently defective. The defendant manufacturer bears the burden to prove the plaintiff driver was also negligent.[42] If the factfinder finds the driver is a legal cause of the accident, then the responsibility must be apportioned between the driver and other negligent actors.[43] Fault is assigned based on the individual’s “awareness or indifference with respect to the risks created . . . and any intent with respect to the harm caused . . . and the strength of the causal connection between the person’s risk-creating conduct and the harm.”[44] Therefore, the defendant, or other drivers, would need to show the driver of the automated vehicle had some awareness of the risks.
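The apportionment arithmetic is simple to make concrete. In this minimal sketch the percentages are invented purely for illustration: suppose a factfinder assigns 60 percent of fault to the manufacturer, 30 percent to the plaintiff driver, and 10 percent to a third driver for a $100,000 verdict:

```python
def apportion_damages(total_damages, fault_shares):
    """Split damages in proportion to assigned fault percentages.

    fault_shares: mapping of party -> fault percentage; must sum to 100.
    """
    if sum(fault_shares.values()) != 100:
        raise ValueError("fault percentages must sum to 100")
    return {party: total_damages * pct / 100 for party, pct in fault_shares.items()}

# Hypothetical verdict: $100,000 in damages, fault split three ways.
shares = apportion_damages(
    100_000,
    {"manufacturer": 60, "plaintiff_driver": 30, "other_driver": 10},
)
print(shares["manufacturer"])  # 60000.0

# Under pure comparative negligence, the plaintiff's recovery is reduced
# by his or her own 30% share of fault:
recovery = 100_000 - shares["plaintiff_driver"]
print(recovery)                # 70000.0
```

The sketch assumes a pure comparative negligence jurisdiction; modified regimes that bar recovery above a 50 or 51 percent fault threshold would add a cutoff before the subtraction.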

Negligence recovery allows apportionment of fault between parties, including those who could have avoided the accident in a cost-efficient manner. For instance, the manufacturer can economically include a warning system alerting the driver when the automated technology goes offline. Also, the driver can easily assume control when things go awry while the automated technology does the bulk of the navigating and driving. Historically, comparative fault was not available; under the doctrine of contributory negligence, plaintiffs who were negligent in any amount were barred from recovery.[45] Today, however, comparative negligence is welcomed as a more efficient and evenhanded theory as it can “lead to an improvement in economic welfare” because it allows sharing or apportioning of damages.[46] Comparative negligence is more efficient because it effectuates the goal of deterrence and punishment with greater specificity, while still compensating the plaintiff with what he or she is owed. Although it may create a fairer and more efficient system of compensation, the legal costs of a negligence regime are higher than those of strict products liability. Parties, notably optimistic defendants who hope to be only partially liable instead of liable for the whole of the damages, may be encouraged to take their chances in court instead of coming to a timely and efficient settlement, which raises attorney fees for all relevant parties. Further, more court cases increase administrative costs because they burden the court system with complex issues. In addition, a court outcome might produce an unfair apportionment between defendants because juries and jurists lack perfect knowledge. Although the legal costs are amplified in a negligence regime, the cost per automated vehicle will be lower than under a strict products liability regime; manufacturers will enjoy a natural buffer against liability because of the more rigorous negligence test. Roads will become safer as a result of more consumers driving automated vehicles because of the reduced cost of negligence liability on manufacturers.


The plaintiff must present evidence that the defective automated technology was the actual and proximate cause of the accident to recover.[47] For strict products liability, the defect must have proximately caused the harm in a “reasonably foreseeable” way. Courts have determined that automobile accidents are a reasonably foreseeable consequence of defective automated technology.[48] Manufacturers must necessarily “contemplate [their products’] travel on crowded and high speed roads and highways that inevitably subject it to the foreseeable hazards of collisions and impacts.”[49] Thus, establishing proximate causation will not be an issue where defective automated vehicles will foreseeably be involved in accidents (i.e. in their everyday use).

The chief issue for automated technology will be proving actual causation. For strict products liability, the evidence must show the defect was “more likely than not” the actual cause of the harm.[50] Other drivers will likely be the ones crashing into automated vehicles. For example, none of Google’s eleven accidents was caused by the automated technology; all resulted from human error.[51][52] Foreseeably, in most cases, the manufacturer will not be held liable because it will not have caused the accident, but will merely have created one of the instruments involved in it. In a rear-end collision, for example, the accident would have occurred even absent the automated technology, so the technology is not a but-for cause.

In October of 2015, Volvo’s CEO announced his company would “accept full liability whenever one of its cars is in autonomous mode.”[53] This offer goes beyond traditional legal methods of recovery because it does not matter whether the automated technology was the cause in fact. The covenant is likely designed to attract potential customers by conveying not only confidence about the vehicle’s safety, but also assurances there will be no future out-of-pocket legal or compensatory costs from accidents arising from the automated technology. A potentially inefficient outcome is that the driver of a Volvo automated vehicle could be compensated twice for the same accident: once by Volvo under its covenant, and again, by judicial decision, by the other driver(s) and defendant(s) who caused the accident (possibly including the manufacturer itself). Still, Volvo’s strict manufacturer liability pledge might be the jump start needed to properly incentivize buyers.


An affirmative defense to liability is provided if the plaintiff-driver assumed the risks associated with driving an automated vehicle. Under this theory, by proceeding with the activity of driving the vehicle in autonomous mode, the driver “manifests willingness to accept [the risk],” and is barred from recovery.[54] However, the assumption of risk must not be contrary to public policy.[55] It is possible that assumptions of risks arising from defective automated technology violate public policy because such defects put other drivers on the road at risk. Also, it may be deemed unfair and harsh to preclude recovery to poorer classes of drivers who seek a cheaper bargain for automated cars by contracting away their rights. However, there is some precedent for manufacturers to create a valid release limiting their liability from collisions because car accidents are foreseeable.[56] Conversely, if accidents resulting from defective automated technology are considered unforeseeable, then assumptions of risk and releases may be void.


The main public policy goal of the proliferation of automated vehicles, notwithstanding the economic benefits to the municipality and state, is the improvement of vehicle and road safety. To that end, the more automated vehicles on roads, the better our collective safety. In addition to saving lives, increased automobile safety has a positive financial impact. It is predicted that a reduction in automobile-related deaths could save over $400 billion each year.[57] The primary question, then, is whether negligence or strict products liability will attract more consumers to automated vehicles. Strict products liability will likely enhance proliferation more than negligence liability because it provides greater consumer security.

Consumers are risk averse and seek security when purchasing new technologies like the automated vehicle. Naturally, potential drivers will be more amenable to automated vehicles if they have assurances the manufacturer will be held strictly liable. The legal costs involved with strict products liability are significantly lower than with negligence because a breach of duty does not need to be established. Moreover, settlements will be achieved earlier because the burden of proof is met more easily than negligence. Although negligence provides a more equitable and fair regime of recovery, the positive social utility of enhanced road safety by the proliferation of automated vehicles through a strict products liability regime presents greater social value.

Volvo’s promise to bear liability for accidents involving its automated technology indicates that strict products liability, at least initially, may be the preferred route to overcome consumer risk aversion. Toyota’s national manager, John Hanson, suggested that consumer trust in automated vehicles is essential to their proliferation.[58] Trust can be developed through a broad undertaking by the manufacturer to bear all costs caused by the automated technology. Negligence recovery may not provide the necessary legal safeguards to confer trust on new consumers of automated vehicles. Additionally, because some manufacturers are offering to assume liability for accidents caused by automated technology, manufacturers who do not make such an offer will likely sell fewer automated vehicles.

According to a 2014 online survey of 782 individuals, the top reasons for buying a new car are (1) reliability, (2) price, (3) running costs, (4) fuel efficiency, and (5) safety rating and features.[59] Although safety may be a strong public policy goal of cars, it is not the top goal for consumers. One solution is a marketing campaign designed to make driving an automated vehicle a symbol of enhanced safety. Consumers might brand themselves in their communities with identities subscribing to a lower automobile fatality rate through their automated vehicle. An analogy is driving a “green” environmentally-friendly vehicle (e.g. a Toyota Prius), which functions as a message to others that the driver is environmentally conscious. Additionally, if studies can depict automated vehicles as extraordinarily safe, that showing may be a strong enough marketing tool to carry greater influence in consumers’ purchasing decisions.[60] For example, even though other drivers might crash into an automated vehicle, automated technology can mitigate the severity of harm by making split-second decisions faster and smarter than human drivers. Highlighting such a safety feature may attract enough consumer attention to boost automated vehicle sales. The consulting firm Booz Allen suggests that “own[ing] the coming transformation” is an important factor for a successful automaker, so perhaps good advertising may be enough.[61] A strong ad campaign can create the impression that the manufacturer is ahead of the curve. Further, automated vehicles may have a strong opportunity for robust initial growth because they are novel, and consumers likely perceive novel cars as reliable and efficient (the first and fourth factors, respectively, in the aforementioned study).[62] Through competitive pricing and increased consumer acceptance, sales of automated vehicles should increase and costs should decrease.[63]


It is forecasted that by the year 2020, there will be ten million automated vehicles on the road.[64] Although more automated vehicles on the road will likely increase traffic safety, accidents involving such vehicles are guaranteed to occur. Regardless of whether strict products liability or negligence is used to determine how fault should attach after an accident, it may be necessary for manufacturers to initially covenant that they will compensate for all damages caused by their technology to be competitive with companies like Volvo. Over time, the covenant to assume damages may no longer be required as consumer trust in the product grows. However, at some point a case will be brought against the manufacturer of the automated vehicle and the court will be forced to consider the strict products liability and negligence regimes. Both theories are workable for plaintiffs to recover for damages caused by automated technology defects.

Strict products liability removes the need for the driver to prove the manufacturer acted negligently in the production of the defective autonomous vehicle. It adheres to modern principles that highly technical products like autonomous vehicles should be free of defects. Although it may be unfair to hold manufacturers liable, strict products liability encourages manufacturers to have superior quality assurance and control standards, which benefits the public. Moreover, it will be easier to administrate with fewer discovery problems and fewer countermotions, and will reduce legal fees for plaintiffs and defendants alike. Nevertheless, strict products liability could increase the cost of automated vehicles because manufacturers will need to absorb more liability, which they might pass on to consumers in the cost per unit.

Negligence theory delivers a fairer system of damages by attempting to provide for the most efficient outcome. The goal of negligence theory is to deter and punish the right actors with the right amount of damages, while fully compensating the plaintiff. It requires a higher showing of proof, which is harder to administrate, and increases legal fees. However, it will likely decrease the cost of each vehicle because manufacturers can budget for lower legal liability, which should increase sales of automated vehicles, which in turn will increase road safety as they proliferate.

A strict products liability regime for defective automated vehicles is ideal because it is more plaintiff-friendly than negligence as the burden of proof is easier to show and will more quickly resolve legal issues for plaintiff drivers. Strict products liability will encourage risk averse consumers to buy automated vehicles, therefore, roads will become safer.[65] Negligence, on the other hand, may unduly deter potential consumers of automated vehicles; despite the lower cost per unit on account of the manufacturer’s lower legal burden, the risk of liability for drivers may appear excessive. Undeniably, automated vehicles do not have the benefit of decades of testing and defects are highly plausible. Strict products liability’s capacity to assuage consumers’ fear of liability outweighs the likely mild benefit of a reduced cost per unit granted in negligence theory.

[1] Google Self-Driving Car Project, (last visited Jan. 5, 2016).

[2] Tom Vanderbilt, Let the Robot Drive: the Autonomous Car of the Future is Here, Wired (Jan. 20, 2012, 3:24 PM),

[3] National Highway Traffic Safety Administration, National Motor Vehicle Crash Causation Survey: Report to Congress, (July 2008).

[4] See James M. Anderson et al., Autonomous Vehicle Technology: A Guide for Policy Makers, RAND Corporation (2014),

[5] Cal. Veh. Code § 38750 (West 2015); Mich. Comp. Laws Ann. § 257.665 (West 2014); Nev. Rev. Stat. Ann. § 482A.100 (West 2012); Fla. Stat. Ann. § 316.86 (West 2014); D.C. Code Ann. § 50-2352 (West 2013).

[6] Mariella Moon, Virginia Opens Up 70 Miles of Highway for Driverless Car Testing, Engadget (Jun. 3, 2015),

[7] § 38750.

[8] Id.

[9] Id.

[10] Id.

[11] Id.

[12] Id.

[13] The term “accident” is used loosely to mean “unintended collision” or any instance where an autonomous vehicle is damaged or causes damage without purpose to do so.

[14] See also John Villasenor, Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation, Brookings Institution (April 2014), (discussing also misrepresentation and breach of warranty as theories of recovery for automated vehicle accidents).

[15] 63 Am. Jur. 2D Products Liability § 519 (2015).

[16] 72A C.J.S. Products Liability § 35 (2015).

[17] See Villasenor, supra note 14.

[18] See, e.g., Mark Lelinwalla, Google’s Driverless Cars are Too Safe at this Point, Tech Times (Sept. 1, 2015, 1:20 PM), (explaining Google’s self-driving car decelerated to let a pedestrian pass a crosswalk and was hit by another car in the rear).

[19] Am. Jur., supra note 15.

[20] 72A C.J.S. Products Liability § 230 (2015).

[21] See Kimco Dev. Corp. v. Michael D’s Carpet Outlets, 637 A.2d 603, 606 (Pa. 1993).

[22] Victor E. Schwartz, Strict Liability and Comparative Negligence, 42 Tenn. L. Rev. 171, 172 (1974-75).

[23] Compare Daniel Sperling et al., The Price of Regulation, Access (Fall 2014), (“Regulatory actions have not distorted or perturbed automotive markets and industry structure much over the last few decades.”) with Robert W. Crandall et al., The Cost of Automobile Safety and Emissions Regulation to the Consumer: Some Preliminary Results, Carnegie Mellon University Research Showcase (1982), (“Manufacturers have managed to incorporate safety improvements into cars relatively inexpensively (with exception to high-impact bumpers), whereas emissions constraints have required costly changes in auto manufacture.”).

[24] Michael Ballaban, Mercedes, Google, Volvo to Accept Liability when their Autonomous Cars Screw Up, Jalopnik (Oct. 7, 2015, 11:47 AM), (introducing Volvo, Mercedes and Google as automakers who will accept liability for autonomous mode accidents).

[25] Restatement (Second) of Torts: Abnormally Dangerous Activities § 520 (1977).

[26] The “inability to eliminate the risk by exercise of reasonable care” is the most important factor in categorizing something as an abnormally dangerous activity, and if the risk could have been eliminated with due care, then negligence theory must be used as the theory for recovery.

[27] Id.

[28] See Brandon Schoettle & Michael Sivak, A Preliminary Analysis of Real-World Crashes Involving Self-Driving Vehicles, University of Michigan Transportation Research Institute (Oct. 2015), (revealing automated vehicles may not be safer currently).

[29] Id.

[30] Id. (“[S]elf-driving vehicles were not at fault in any crashes they were involved in.”)

[31] Id.

[32] Restatement, supra note 25 (italics added).

[33] Thad Moore, As Self-Driving Cars Come to More States, Regulators Take a Back Seat, The Washington Post (Aug. 29, 2015), (“Virginia is one of a handful of states seeking to attract the potentially lucrative business of developing self-driving cars. And along with a few other states, its lawmakers and regulators are inclined to welcome the industry — and get out of the way.”).

[34] 65 C.J.S. Elements of Actionable Negligence § 20 (2015).

[35] Larsen v. Gen. Motors Corp., 391 F.2d 495, 504 (8th Cir. 1968).

[36] Id. at 503.

[37] Thompson v. Kaczinski, 774 N.W.2d 829, 838 (Iowa 2009).

[38] Restatement (Second) of Torts: Effect of Violation § 288B (1965).

[39] Craig v. Driscoll, 781 A.2d 440, 452 (Conn. 2001).

[40] § 38750.

[41] Restatement (Third) of Torts: Apportionment Liability § 8 (2000).

[42] See id. § 4.

[43] Id.

[44] See id. § 8.

[45] See, e.g., Butterfield v. Forrester, 103 Eng. Rep. 926 (K.B. 1809).

[46] Daniel L. Rubinfeld, The Efficiency of Comparative Negligence, 16 Journal of Legal Studies 375, 392 (Jun. 1987).

[47] 72A C.J.S. Products Liability § 35.

[48] Cronin v. J.B.E. Olson Corp., 501 P.2d 1153, 1157 (Cal. 1972) (“Although a collision may not be the ‘normal’ or intended use of a motor vehicle, vehicle manufacturers must take accidents into consideration as reasonably foreseeable occurrences involving their products.”).

[49] Larsen v. Gen. Motors Corp., 391 F.2d 495, 504 (8th Cir. 1968).

[50] 72A C.J.S. Products Liability § 230.

[51] Jerry Hirsch & Joseph Serna, Humans at Fault in Self-Driving Car Crashes, Los Angeles Times (May 12, 2015, 5:00 AM),

[52] Cf. Ballaban, supra note 24, (“[T]hat doesn’t mean it’s a physical impossibility that a self-driving car could ever be at fault, in a universe full of whimsical happenings.”).

[53] Jim Gorzelany, Volvo will Accept Liability for its Self-Driving Cars, Forbes (Oct. 9, 2015, 11:38 AM),

[54] Restatement (Second) of Torts § 496C (1965).

[55] Id. § 496B.

[56] See 61 Patricia C. Kussmann, Annotation, Validity, Construction, and Effect of Agreement Exempting Operator of Fitness or Health Club or Gym from Liability for Personal Injury or Death of Patron, 61 A.L.R. 6th 147 (2011) (“Because injuries associated with physical training and exercise are neither unforeseeable nor unexpected, and because of the potentially large financial exposure associated with the inevitable injuries, gyms and health clubs generally require patrons to sign a release . . . . [T]he agreement was not void as a matter of public policy.”).

[57] Lauren Keating, The Driverless Car Debate: How Safe are Autonomous Vehicles?, Tech Times (Jul. 28, 2015, 9:00 AM),

[58] Avoiding Crashes with Self-Driving Cars, Consumer Reports (Feb. 2014),

[59] Motorists Rank their Top Factors in Choosing a New Car, The Telegraph (Oct. 1, 2014, 1:10 PM),

[60] See Clifford Winston & Fred Mannering, Consumer Demand for Automobile Safety, 74 American Economic Review 316, 319 (May 1984) (“[I]f the benefits from safety devices are more firmly established and more widely known, then there will be a greater likelihood that they will be actually realized.”).

[61] The Connected Vehicle Movement, Booz Allen Hamilton,

[62] Julia Pyper, Self-Driving Cars could Cut Greenhouse Gas Pollution, Scientific American (Sept. 15, 2014), (“[S]o-called intelligent transportation systems (ITS) could achieve a 2 to 4 percent reduction in oil consumption and related greenhouse gas emissions each year over the next 10 years as these technologies percolate into the market.”).

[63] Consumer Reports, supra note 58.

[64] John Greenough, 10 Million Self-Driving Cars will be on the Road by 2020, Business Insider (Jul. 29, 2015, 9:00 AM),

[65] Negligence per se may provide an equally plaintiff-friendly regime.