Previously published on The Hill.
As the tech industry faces continued backlash over the impact of what it puts out in the world, we need a better way to balance innovation with civic responsibility. In this negotiation, both the industry and policymakers need to give a little ground.
I won’t recount the litany of concerns being raised about tech right now, but Twitter’s impact on public discourse, Facebook’s use to sway elections and the impact of the gig economy on job security are just a few — not to mention the looming fears of AI.
No one wants to curtail innovation, which has a huge positive impact in areas like healthcare, the economy and overall quality of life. But we need a way to bring ambitious technologies to market that doesn’t harm or alienate those they are supposed to serve. Doing otherwise is irresponsible and invites regulation.
To figure this out, it’s instructive to look at electric scooters, which went from novelty to a global phenomenon in a matter of months. To some they are a solution to vehicle pollution and congested traffic; to others they’re an irritating menace. Certainly, a lot of people ride them, and investors have quickly poured more than $800 million into Lime and Bird alone.
Transportation is a tricky area to innovate. Without dedicated lanes, e-scooters must coexist with pedestrians and vehicles even as we figure out if they’re actually safe. When regulators raise legitimate concerns, the technorati should not be so quick to scoff at them, as some do.
But local governments should also be more progressive. We’re in urgent need of new, more efficient modes of transport, and policymakers would serve their constituents better if they found ways to accommodate new technologies instead of blocking them outright based on antiquated rules.
For scooters, one suggestion is to conduct controlled experiments to quickly understand their impact. Los Angeles and London are vast cities; it should be possible to designate areas with wide streets and enough commuters to run a meaningful pilot. Collecting data around agreed-upon metrics would highlight problems and tell us whether to proceed more broadly.
Other types of innovation are harder to gauge. It’s relatively easy to measure the impact of a physical public service, but “virtual” technologies such as AI and social media are harder to assess until they’ve been widely adopted, by which time any harm may have been done. However, transparency in data can be useful here, too.
A big problem with services like Facebook and Twitter is their inscrutable nature. It’s hard for end users to know where a photo or a news item originates, and most people still trust too much of what they see. If Twitter were to tell me there’s a 60 percent chance that what I’m looking at was produced by a Russian troll farm, or that only 20 percent of users consider a news source trustworthy, I could start to make informed decisions about what I consume.
More and more of these virtual innovations are driven by opaque algorithms. They determine insurance quotes, credit card offers, and, frighteningly, even prison sentences. As innovations like this permeate daily life, being transparent about how data is being used is critical to deciding if services are fair and deserving of our trust.
Ultimately, the value of an innovation comes down to its utility and disutility. What benefit does this innovation have for me? What is the cost of that benefit to others? And what are the broader impacts on the economy, on society or democracy?
These are hard questions to answer. We need to innovate responsibly, accommodating inventions that seem worthy of our trust and proceeding carefully with those that don’t. That requires the tech industry and policymakers to be more understanding of one another. The price of failure may be the freedom to innovate itself, and that’s something we can’t afford to give up.