Baking Technology Ethics into Your Digital Transformation

The 4 Cornerstones of Digital Ethics

Digital technology has changed our society in incalculable ways. From cell phones to social media to email, technology now shapes our entire lives in ways we may not even be fully aware of.

However, the more digital technology permeates our world, the more concerned people are becoming about its negative effects. Worker displacement, the erosion of data privacy, the rise of misinformation, the environmental crisis and the global mental health emergency can all be attributed, at least in part, to the increasing prevalence of technology across all facets of society.

While many of these societal ills are most often associated with popular social media platforms, gaming systems and mobile applications, the truth is that even the most benign-seeming technology can, whether intentionally or not, easily be weaponized to cause harm. Furthermore, digital transformation success hinges on stakeholder trust. If users and customers don’t trust your organization or the technology you leverage, your digital transformation, and your business along with it, will likely fail. 

And we’re not alone in thinking this. According to recent research conducted by Deloitte, 57% of respondents from “digitally maturing” organizations say their leaders spend adequate time thinking about and communicating digital initiatives’ societal impact. 

Furthermore, ethics frameworks should consider not only outcomes but also data sources, methods of computation, technology use, safety and operational risk, and the assumptions embedded in automated decision making.

Beyond Data Protection and Privacy

It goes without saying that ethical business practices start with compliance. However, when it comes to data protection and privacy, ethical data usage is more than just a regulatory obligation; it’s a strategic imperative. After all, data-driven applications and automations are only as good as the data they ingest.

With this in mind, forward-thinking organizations are developing and implementing comprehensive data ethics guidelines to help ensure the digital technology and AI they deploy do not cause unintended harm. For example, a guideline might require that every dataset pass basic consent, anonymization and provenance checks before it is ingested.
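As a minimal sketch of what such a check could look like in practice (the field names and rules below are illustrative assumptions, not drawn from any published framework):

```python
from dataclasses import dataclass

@dataclass
class DatasetProfile:
    """Metadata an ethics review might require before ingestion (illustrative fields)."""
    name: str
    has_user_consent: bool    # was the data collected with informed consent?
    contains_pii: bool        # does it include personally identifiable information?
    is_anonymized: bool       # if PII is present, has it been anonymized?
    documented_source: bool   # is the provenance of the data recorded?

def passes_data_ethics_review(ds: DatasetProfile) -> tuple[bool, list[str]]:
    """Apply simple guideline rules; return pass/fail plus the reasons for any failure."""
    issues = []
    if not ds.has_user_consent:
        issues.append("no record of user consent")
    if ds.contains_pii and not ds.is_anonymized:
        issues.append("PII present but not anonymized")
    if not ds.documented_source:
        issues.append("data provenance is undocumented")
    return (not issues, issues)

ok, issues = passes_data_ethics_review(DatasetProfile(
    name="customer_support_logs", has_user_consent=True,
    contains_pii=True, is_anonymized=False, documented_source=True))
print(ok, issues)  # False ['PII present but not anonymized']
```

The point is less the specific rules than the pattern: encoding guidelines as explicit, auditable checks rather than leaving them to individual judgment.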

Workplace Impact

One of the biggest concerns surrounding intelligent automation and digital transformation is that new technology will displace human workers. Truth be told, this fear is not unfounded.

According to Forrester, automation will replace 12 million jobs in the US by 2025. In addition, automation has been linked to decreased wages, economic stagnation and adverse mental health effects.

Case in point: as we outlined in a previous piece about workplace burnout, 45% of U.S. workers say the technology they use at work does not make their job easier and, in fact, leaves them very frustrated.

The time has come for organizations to assess digital technology not only for the value it brings shareholders, but for its potential impact on their human workforce. At the heart of this endeavor lies IT/business alignment. By working closely with business units to ensure new digital investments advance both business objectives and the employee experience, IT can increase adoption rates and the chances of overall success.

Environmental Impact

There’s no doubt about it: the proliferation of digital technology is exacerbating many, if not all, of the world’s most urgent environmental crises. From the disastrous environmental impact of rare metal mining to the staggering amounts of energy a single AI model consumes, digital technology of all kinds comes with substantial environmental costs.

Though calculating the environmental impact of digital technology can be incredibly difficult and complex, organizations and researchers are starting to do just that. Large tech companies such as Apple, Meta and Google have all made ambitious pledges to reduce their carbon footprints. While some of their claims are a bit dubious, they have significantly increased the efficiency of GPUs, TPUs and other data processing technology.
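To make that kind of calculation concrete, here is a rough back-of-envelope sketch of how one might estimate the carbon footprint of a single model training run. Every figure below is an illustrative assumption, not a measurement:

```python
# Rough estimate: energy (kWh) = GPUs * watts per GPU * hours / 1000 * PUE,
# emissions (kg CO2e) = energy * grid carbon intensity (kg CO2e per kWh).
# All numbers below are placeholder assumptions, not measured values.

num_gpus = 64          # accelerators used for the training run
watts_per_gpu = 300    # average power draw per accelerator, in watts
training_hours = 120   # wall-clock duration of the run
pue = 1.5              # power usage effectiveness (data-center overhead)
grid_intensity = 0.4   # kg CO2e emitted per kWh on the local grid

energy_kwh = num_gpus * watts_per_gpu * training_hours / 1000 * pue
emissions_kg = energy_kwh * grid_intensity

print(f"Energy: {energy_kwh:,.0f} kWh, emissions: {emissions_kg:,.0f} kg CO2e")
# Energy: 3,456 kWh, emissions: 1,382 kg CO2e
```

Real carbon accounting is far more involved (embodied hardware emissions, cooling, location-based versus market-based grid intensity), but even a crude estimate like this makes the cost visible enough to inform decisions.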

It’s Not Just the Technology, It’s the Culture

As AI and automation become more prevalent, so do scandals involving their unintended consequences.

Take, for example, the recent Charles Schwab robo-advisor saga. In June 2022, Charles Schwab agreed to pay $187 million to settle an SEC investigation into alleged hidden fees charged by the firm’s robo-advisor, Schwab Intelligent Portfolios. As reported by the Washington Post, “The Securities and Exchange Commission accused Schwab — which controls $7.28 trillion in client assets — of developing automated advisory products that recommended investors keep 6 percent to 29.4 percent of their holdings in cash, rather than invest them in stocks or other securities. Investors stood to gain significant income if that money had been invested; instead Schwab used the cash to issue loans and collect interest on those funds.” In other words, the product was [allegedly] designed to make money for Charles Schwab, not for its clients.

Though the settlement does not require Charles Schwab to admit any wrongdoing, it’s easy to see how something like this could happen. The humans behind technology (programmers, product marketers and so on) are conditioned from the very first day they enter the workforce to prioritize profitability above all else. It’s only natural that these biases would be reflected in the technology they create.

However, that does not mean these outcomes can’t be avoided. By integrating ethical decision making into every step of the development and operationalization process, you can minimize ethics-related risks.
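One lightweight way to do this, sketched below with purely hypothetical checklist items, is to treat ethical review like any other release gate: nothing ships until every step has been explicitly signed off.

```python
# Illustrative release gate: each ethics review step must be signed off
# before a model or automation ships. The step names are hypothetical.

ETHICS_GATE = {
    "data_sources_reviewed": True,            # provenance and consent verified
    "automated_decisions_audited": False,     # assumptions in decision logic checked
    "workforce_impact_assessed": True,        # effect on employees evaluated
    "conflicts_of_interest_declared": False,  # e.g., revenue vs. customer benefit
}

def ready_to_ship(gate: dict[str, bool]) -> bool:
    """Return True only if every review step is complete; otherwise list what blocks."""
    pending = [step for step, done in gate.items() if not done]
    if pending:
        print("Blocked on:", ", ".join(pending))
        return False
    return True

ready_to_ship(ETHICS_GATE)
# Blocked on: automated_decisions_audited, conflicts_of_interest_declared
```

The checklist items themselves would come from your organization’s own ethics framework; what matters is that the gate is explicit and blocking, not merely advisory.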
