During the 1930s, with the New Deal kicking off public infrastructure works throughout the United States, a larger-than-life figure rose to prominence in New York City. Robert Moses, an urban planner and Park Commissioner, was largely responsible for the rapid growth of New York City infrastructure in response to the rise of the automobile and the influx of government funding. Moses was responsible for hundreds of public works projects: the bridges of the boroughs of New York, the parkways of Long Island, and playgrounds throughout the city.1
This piece originally appeared in Model View Culture Quarterly 3, 2016. It is republished here with full rights.
Anyone well-traveled in New York is familiar with the transit prohibitions on state parkways: trucks and buses—and therefore, most forms of public transit—are barred from using these roads. But Moses took proactive measures to ensure public transit was permanently stymied. He built dozens of bridges with low clearance over the parkways of Long Island. Laws can change, he figured, but bridges are hard to rebuild.2 In doing so, Moses ensured that the Black and Jewish communities of New York would not have easy access to parts of Long Island, most notably his beloved Jones Beach. His bridges would become artifacts of oppression, enforcing a social structure that lasts to this day.
The current era of technology has already seen connectivity and big data become the next layer of public infrastructure. Governments and corporations are increasingly reliant on insights extracted algorithmically from large data sets. The scope and scale of the data that affects day-to-day life continues to expand as more and more devices are connected to the internet. The ethical questions of how and when it is appropriate to use that data, and to what ends, are growing ever more complex. Just as Robert Moses weaponized his bridges to leverage public infrastructure against those he found undesirable, so too can IoT data become a weapon of power and privilege. Technologists in the IoT space must weigh innovation against questions of safety, privacy, security, legality, liability, and morality.
It is too often assumed that data serves as an objective form of truth, but this is often not the case. In general, we have no guarantee of the accuracy of IoT-enabled sensors or the validity of their algorithms beyond manufacturers’ marketing claims. And because so many of these devices are developed in high-velocity startup environments, it comes as no surprise that very little verification and validation effort is spent to ensure robustness of the algorithms or accuracy of the collected data. Overreliance and overconfidence in data and algorithmic interpretation can lead directly to user harm.
In 2015, a woman visiting a co-worker in Lancaster, Pennsylvania, reported a rape. While investigating the claim, police found her activity tracker; she willingly provided her password to access the data. Investigators determined that the data undermined her claim and charged the woman with misdemeanor false reporting. In April 2016, the woman was put on probation. The prosecuting attorney, proffering no ambiguity as to the value of the data, said that the device “made all the difference,” and that it “sealed the deal for us.”3 These conclusions become more worrisome when it is demonstrated that such activity trackers can extract a heart rate from a piece of raw chicken breast.4 Despite making claims to track heart rate and energy expenditure, the fitness tracker at the center of this case is not classified by the FDA as a medical device. Instead, it is a “general wellness device,”5 and as such need not be proven safe and effective. Although a consumer protection lawsuit has been filed against Fitbit, the suit merely challenges the veracity of the company’s marketing claims, not the actual effectiveness or accuracy of the devices themselves.6
The inaccuracies of these devices can have a disproportionate impact on those relying on them to make health decisions. Research has shown that the mean absolute percent error of energy expenditure estimates from consumer-grade fitness trackers is between 15% and 30%.7 When combined with the innate inaccuracies of food calorie measurements, reliance on fitness tracker data can undermine choices in diet and exercise.8
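To see how such errors compound, consider a rough back-of-the-envelope sketch. All of the figures below are assumptions chosen for illustration, not measurements from any particular device or study:

```python
# Illustrative sketch of how tracker and food-label errors can compound.
# All figures are assumptions for demonstration, not measured values.

true_burn = 2000       # actual daily energy expenditure (kcal)
tracker_error = 0.25   # tracker overestimates burn by 25% (within the reported 15-30% range)
label_error = 0.20     # food labels underestimate true caloric content by 20%

reported_burn = true_burn * (1 + tracker_error)    # tracker reports 2500 kcal burned
target_deficit = 500                               # user aims for a 500 kcal daily deficit
labeled_intake = reported_burn - target_deficit    # so they eat 2000 kcal per the labels
true_intake = labeled_intake * (1 + label_error)   # which is really 2400 kcal

print(f"Intended daily balance: -{target_deficit} kcal")
print(f"Actual daily balance:   {true_intake - true_burn:+.0f} kcal")  # prints +400
```

Under these assumed error rates, a user faithfully following their tracker ends up in a 400-kilocalorie daily surplus while believing they are maintaining a 500-kilocalorie deficit.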
These inaccuracies can become downright dangerous in a culture that mandates fitness. Oral Roberts University recently mandated that incoming students own and use activity monitors. The university will use these data to evaluate students’ adherence to its physical fitness requirements, which will affect their grades.9 Furthermore, Target recently announced an “opt-in” program for employees who volunteer to wear fitness trackers and provide the company with health data; participants can compete for a sizable donation to a charity of their choice.10 The inaccuracies and unreliability of these devices amplify a power gradient that already leans against the user’s best interest.
As technology is integrated into otherwise unconnected devices, the conventional failure modes of those devices must be re-evaluated with due regard to the added complexity of connectivity and automation. This became clear recently when, for the first time, a fatality resulted from an accident in which a car was driving autonomously. A man in Florida died when his Tesla ran under the trailer of a tractor-trailer that crossed the road in front of him.11 Tesla’s statement about the incident said, “neither [the] Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied.” The company notes that it informs users that “the system is new technology and still in a public beta phase.”12 Later, the company referred to the crash as a “statistical inevitability.”13 In how many other industries can a manufacturer shrug their shoulders in response to even a single accidental death and claim, “well, it’s a beta”?
Most early IoT technologies are entirely unregulated. A car company can ship a beta-version auto-navigation system and, as long as a driver is behind the wheel, virtually no laws govern the safety of its algorithms. There are no safety laws, and few voluntary standards, governing the use of software in consumer devices generally. In some cases, device manufacturers can even freely acquire, transmit, store, and sell data that would otherwise be legally protected. For example, the Samsung Family Hub™ is a smart refrigerator that helps its users manage food inventory by taking a photograph of the interior every time the doors are closed. For users who take medication that requires refrigeration, such as the multiple sclerosis drug Copaxone®, the Family Hub™ becomes a device that digitizes and uploads private health information—possibly to parties unknown.
In the United States, such data are protected by HIPAA/HITECH laws, but only if handled by a provider, insurance company, or other covered entity. In this context, Samsung is not a HIPAA-covered entity, and therefore is not beholden to the law’s data security regulations: there are no enforceable rules for accountability or transparency regarding the storage, handling, and sale of the data it collects. This is noteworthy because Samsung’s IoT platform, SmartThings™, has already been hacked.14 The authors of the paper that exposed the vulnerability in a SmartThings door lock note that such attacks may “expose a household to significant harm—break-ins, theft, misinformation, and vandalism.”15 But even these concerns expose an inherent privilege—crimes against property, not the person. Much of the emergent IoT security work has focused on protecting the interests of early adopters, typically technologists themselves. There is more apparent concern among developers that an IoT coffeemaker might expose a WiFi password than that a buffer overflow might cause its heating element to stick on, starting a fire that displaces a family. Yet people who worry about their next paycheck or their next meal can rarely afford to prioritize the operational security of their data.
A lack of transparency and accountability often accompanies a lack of consumer protection regulations. Lack of regulation generally enables companies, particularly startups, to move fast and disrupt the status quo. But software fails with alarming regularity, and when internet-connected software is installed on a physical system with failure modes that can lead to injury or death, disrupting the status quo means disrupting the entire ethical framework our society has constructed around holding people accountable for failures. If human error leads to injury, death, or undue financial loss, there are often liability laws and insurance policies to ensure the aggrieved receive restitution for their hardship; but with untraceable algorithmic failures it is not clear who, if anyone, is responsible for helping the victims return to relative normalcy. This lack of liability disproportionately hurts those who are not on the uphill side of financial privilege. If Samsung’s Family Hub™ accidentally leaks someone’s prescription information to their boss, who is to blame when they find themselves on the wrong end of a pink slip?
## Building an Empowered, Connected Future

What is compelling about the infrastructure framework made possible by the aggregated data of billions of devices, though, is the unprecedented power it has to do good. Technology has long promised that innovation would lead to technological empowerment of the underrepresented and underserved: access to education, employment, and medical care would all change as technology evolved. And although the evolution of technology has helped narrow the gap, companies still argue that there are insufficient data to accomplish these goals. In a self-review of its ethics practices, Facebook argues, “as the company grows, our research agenda expands to include projects that contribute value to our community and society. For instance, our accessibility team develops technologies to make Facebook more inclusive for people with disabilities.”16 Facebook’s recent valuation of $350 billion makes it the sixth-largest corporation in America;17 the company does not indicate how much more growth is necessary to make the research agenda of accessibility a core priority.
The slow investment rate in accessible technology is especially frustrating considering how much of an impact low-cost, standardized access to connected sensing technology can have for people living with disabilities. People with lower-limb amputations, by way of example, are at increased risk of cardiovascular illness;18 smart health monitors, tailored to individual needs, would provide an immediate health benefit. Fine-grained biometric data would be an epidemiological boon for understanding the care of many users with disabilities.
Incidentally, medical devices are one of the few IoT-candidate consumer-level technologies in which software is strongly regulated. Entering the medical device space means that products must be demonstrated to be safe and effective, a time-consuming and expensive process. Such regulations are often at odds with high-velocity innovation culture. Despite regularly making futuristic claims that technology will solve disease and social issues, tech companies at best react fearfully to regulations and at worst flout them entirely. When Austin, Texas voted by referendum to impose regulations on ride-sharing services, Uber and Lyft ceased operations in the city, abandoning their drivers overnight.19
Robert Moses designed the footings of the Long Island Expressway to be too weak to support anything but vehicular traffic, forever blocking the development of a light rail system and forever relegating Long Island to a car-centered land. When infrastructure is designed only for the richest, most capable, or most eager users, that infrastructure excludes everyone else. Designing the IoT without considering the needs, lifestyles, backgrounds, and circumstances of a diverse user base will only serve to widen the digital divide. Until the ethical calculus shows that the moral weight of inclusion is mightier than the earning potential from the privileged, the IoT will avoid serving those who could most benefit from it, in favor of those who can best pay for it. Perhaps this is what Facebook meant by “as the company grows.”
The technological infrastructure being built is collecting data on a very narrow segment of society. When these data are used for decision- and policy-making, early adopters are granted an implicit and unequal voice, in addition to the unequal benefits they gain. Because the intersection of early adopters and those with social and political privilege is so large, concerns of overrepresentation are too often swept under the rug. Ironically, this reinforces barriers to technological adoption. This was most evident when Apple shipped its HealthKit without a menstruation tracker.20
Nevertheless, immediate benefits can still be gained through conscientious design and a deliberate effort to provide access to IoT technology. Samsung’s Family Hub™ could provide data used to identify food access issues and address food deserts. Low-cost air-quality monitors could improve environmental quality, leading to better overall public health. Smart appliances could interact with power-generation facilities to reduce load, yielding energy efficiency improvements. And the recent fatality notwithstanding, Tesla’s Autopilot feature has so far resulted in fewer deaths per mile than conventional cars.
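That comparison is simple rate arithmetic. A minimal sketch, using the mileage figures Tesla cited in its mid-2016 statements (one fatality in roughly 130 million Autopilot miles, versus one per roughly 94 million vehicle miles across all US driving):

```python
# Rate comparison using the figures Tesla cited in mid-2016.
autopilot_rate = 1 / 130e6  # fatalities per Autopilot mile
us_rate = 1 / 94e6          # fatalities per mile, all US driving

print(f"Autopilot fatality rate: {autopilot_rate / us_rate:.0%} of the US average")
# prints ~72% -- but with only a single fatality in the numerator,
# the Autopilot estimate carries a very wide uncertainty.
```

With one data point, the comparison should be read as suggestive rather than conclusive.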
In his 1980 essay “Do Artifacts Have Politics?” Langdon Winner wrote:
> …the intractable properties of certain kinds of technology are strongly, perhaps unavoidably, linked to particular institutionalized patterns of power and authority. Here, the initial choice about whether or not to adopt something is decisive in regard to its consequences. There are no alternative physical designs or arrangements that would make a significant difference; there are, furthermore, no genuine possibilities for creative intervention by different social systems—capitalist or socialist—that could change the intractability of the entity or significantly alter the quality of its political effects…. In our times people are often willing to make drastic changes in the way they live to accord with technological innovation at the same time they would resist similar kinds of changes justified on political grounds.
The Internet of Things is showing us that we now have the computing power, connectivity, and data to create a technological infrastructure that can change how people live and how communities evolve. Society will be changed by the technologist’s actions; the technologist’s ethos must therefore include an assurance that their creations will not induce a change in one segment of society at the expense of another. We can’t allow the bridges of the Internet of Things to be built too low.
*The Power Broker*, Robert Caro, 1974. ↩︎
[“Robert Caro Wonders What New York is Going to Become,”](http://gothamist.com/2016/02/17/robert_caro_author_interview.php) Christopher Robbins, Gothamist, February 17, 2016. ↩︎
“‘TODAY’ airs segment on fake rape report foiled by Fitbit in Lancaster County,” Tom Knapp, Lancaster Online, April 19, 2016. ↩︎
Emily Gorcenski, via Twitter: https://twitter.com/EmilyGorcenski/status/692003645437677568 ↩︎
“General Wellness: Policy for Low Risk Devices,” FDA, January 20, 2015. ↩︎
“Fitbit’s accuracy questioned in lawsuit,” Jen Christensen, CNN, May 20, 2016. ↩︎
“Comparison of Consumer and Research Monitors Under Semistructured Settings,” Yang Bai et al., Medicine & Science in Sports & Exercise, Vol. 48, 2016. ↩︎
“Science Reveals Why Calorie Counts Are All Wrong,” Rob Dunn, Scientific American, September 1, 2013. ↩︎
“Oral Roberts University to Track Students’ Fitness Through Fitbits,” Elizabeth Chuck, NBC, February 3, 2016. ↩︎
“Fitbit guns for the workplace as it achieves HIPAA compliance,” Valentina Palladino, Ars Technica, September 17, 2015. ↩︎
“Tesla’s Autopilot being investigated by the government following fatal crash,” Jonathan M. Gitlin, Ars Technica, June 30, 2016. ↩︎
“A Tragic Loss,” Tesla Motors, June 30, 2016. ↩︎
“Misfortune,” Tesla Motors, July 6, 2016. ↩︎
“Samsung Smart Home flaws let hackers make keys to front door,” Dan Goodin, Ars Technica, May 2, 2016. ↩︎
“Security Analysis of Emerging Smart Home Applications,” Earlence Fernandes, Jaeyeon Jung, and Atul Prakash, Proceedings of the 37th IEEE Symposium on Security and Privacy, 2016. ↩︎
“Evolving the IRB: Building Robust Review for Industry Research,” Molly Jackman and Lauri Kanerva, Washington & Lee Law Review Online, Vol. 72, 2016. ↩︎
“Why Facebook could one day be worth $1 trillion,” Paul R. La Monica, CNN Money, April 28, 2016. ↩︎
“Why traumatic leg amputees are at increased risk for cardiovascular diseases,” J.E. Naschitz and R. Lenger, QJM, Vol. 101, 2008. ↩︎
“Austin drivers in the lurch after Uber, Lyft exit,” Sara Ashley O’Brien, CNN Money, May 10, 2016. ↩︎
“Apple promised an expansive health app, so why can’t I track menstruation?” Arielle Duhaime-Ross, The Verge, September 24, 2015. ↩︎