Tech Startups Have an Ethical and Reputational Edge Over Big Tech

Published on September 21, 2021

Earlier this year the Head of Google Research, Jeff Dean, conceded that his employer had taken a “reputational hit” after it fired Timnit Gebru and Margaret Mitchell, the (former) co-leaders of Google’s Ethical AI Team. The backlash continued as more details of the story were revealed. WIRED magazine provided an in-depth look at not only the firings, but also a surrounding culture ridden with (allegations of) racism, sexism, and territorial cliques.

Google is not the only tech company suffering a loss of trust among its employees and consumers. Facebook and Amazon regularly meet similar fates. All three are routinely criticized for misappropriating the data they collect, sharing it with undisclosed third parties, creating “filter bubbles” without their users’ knowledge, and producing discriminatory AI algorithms. All of this happens against the backdrop of younger generations putting their money where their values are while the CEOs of these companies testify before the U.S. Congress. Indeed, on June 15, 2021, in a rare moment of bipartisanship, the Senate confirmed Lina Khan in a 69-28 vote to lead the Federal Trade Commission. Khan is a leading advocate for greater enforcement of antitrust and consumer protection laws against big tech.

Why can’t big tech solve its ethical problems?

Surely it would be better for them not to play public relations defense every day while bleeding consumer trust and fending off regulatory investigations. Why has Facebook’s oversight board created embarrassments for the company instead of sparing it from them? Why was Google’s AI ethics board dissolved less than a week after its formation was announced? How did Amazon not know that having its drivers pee in bottles due to a lack of breaks is both ethically and reputationally (not to mention aesthetically) odious?

Two reasons why these companies find it so difficult to be better

The first is an issue of sheer size. Turning around a large ship is difficult, even when the desire is there. Think about how much time and resources have to go into righting the ship: creating a culture and infrastructure where these issues are taken seriously (and so built into product development, deployment, quality assurance, etc.), assigning ethics-related responsibilities to existing and newly created roles, ensuring that financial compensation packages are aligned with the ethical goals of the company, and so on. It’s a big lift.

The second reason is that their respective business models incentivize (if not require) ethical breaches. Facebook, for instance, is driven by its ad revenue, which requires that they collect massive troves of data about their users, resulting in violations of privacy. They also need to keep people on their platform for as long as possible, leading to the kinds of manipulative technologies detailed in the recent documentary, “The Social Dilemma”.

Tech startups can and should punch harder than their behemoth competitors

Startups are small ships. So long as their founders and senior leaders take issues like data privacy and AI ethics seriously, they can transmit that to the team as a whole and build it into their products, which number in the single digits. It’s also easy for them to build their ethical credentials into their marketing campaigns and their sales pitches to their potential (enterprise) clients who need to work with companies they can trust not to mar their own reputations.

Their business models are also far more flexible than those of big tech. For instance, rather than adopting an ad model and all the ethical troubles it entails, a startup can choose a subscription model: a straight cash-for-service transaction that doesn’t require exploiting user data. In fact, it even opens the possibility of financially compensating users for their data, which stands to benefit users and businesses alike. Facebook can’t do that without taking on a tremendous amount of risk.

Startups are beginning to take advantage of this ethics-first approach

Google has seen competitors touting their ethical credentials vis-à-vis respecting privacy, most notably DuckDuckGo and Neeva. Another example is the recently announced Bizconnect, a global B2B search engine. Google’s practice of having companies bid for sponsored keywords means that massive corporations routinely purchase highly coveted top spots in search results, pushing smaller businesses down the page. On Google’s model, the rich get richer on an uneven playing field. On Bizconnect’s no-bid model, everyone pays the same, and rankings among sponsored keywords rotate in a carousel fashion, giving every company – large and small alike – an equal opportunity to be first, second, and third in the search results. (Full disclosure: I serve on the advisory board of Bizconnect.)

Startups should think about how they can turn the bad news for big tech into good news for themselves, their users and customers, and society as a whole. They should think about how to get their ethical house in order early on, and in a scalable way.

Start by collecting as little user data as possible

More specifically, they should collect what is needed and not more, and be transparent with users about why their data is being collected.

Sometimes ethical problems only occur at scale. YouTube serving one person a video containing disinformation about the illegitimacy of an election is not so bad. Serving it to hundreds of millions is a big ethical problem. Startups should think about what their brand’s ethical characteristics look like if they’re wildly successful; that will improve not only how they think about their products but also how they think about their business model.

Most importantly, startups should stop thinking of their users as “users”. Instead, they should think of them as people with whom they have a relationship. They should ask themselves, “How can we operate in a way that justifies the people we serve thinking of us as trustworthy?” That’s not the same thing as “How can we cause our users to trust us?” Part of this should involve talking to people outside their immediate circles about their business model and practices. It’s particularly important that those conversations occur with people who do not stand to benefit if the startup is successful. Financial interests can blind even the best of us, and before you know it, like Google, you quietly drop your commitment to “don’t be evil.”

Reid Blackman, Ph.D., is a contributor to Grit Daily and is among the foremost experts in organizational and AI ethics. He is a member of the Company Advisory Board and is part of the Bizconnect founding team. Dr. Blackman is the Founder and CEO of Virtue, an AI ethics consultancy. He has been a Senior Advisor to Ernst & Young and sat on their Artificial Intelligence Advisory Board. He is a member of IEEE’s Ethically Aligned Design Initiative and the EU AI Alliance. His work has been profiled in The Wall Street Journal and he has contributed multiple pieces to The Harvard Business Review, TechCrunch, and VentureBeat. Prior to founding Virtue, Reid was a professor of philosophy at Colgate University and a Fellow at the Parr Center for Ethics at the University of North Carolina, Chapel Hill.
