Recent headlines have reported disturbing news about respected and respectable scholars falsifying, or simply ignoring, data in scholarly papers. This is another example of why so many of us are skeptical of the misinformation flooding our inboxes and newsfeeds, compelling each of us to exercise our critical thinking skills. And the examples we’re referring to aren’t even the result of AI; they stem from human error, strong bias, or manipulative intent of one kind or another. That leads us to another topic in our continuing exploration of human motivation: Why do we lie? Why do we cheat? Kevin Novak takes a deeper dive into the issues, the people, and the actions that have been in the news recently.
Kevin Novak sets the table with his opening statement: Whatever you think of the U.S. government or our elected officials, it does have guardrails in place to protect its citizens. For pharmaceuticals and food products, it’s the FDA. For workplace safety, there’s OSHA. For mobility safety, it’s the Department of Transportation. For safe investments, there’s the SEC. For consumer protection, there’s the Federal Trade Commission. For AI and emerging tech, there’s nothing.
Kevin Novak, a digital strategist and CEO with extensive experience digitizing disparate collections at the Library of Congress, discusses the growing importance of acknowledging and incorporating social proof into your marketing strategies to showcase the power of your brands and services. The recent wave of digital tools built to influence decisions has come under increasing scrutiny as we have learned they may not be all that trustworthy. Examples include TikTok and its power to influence, and even change, the behaviors of impressionable next gens; Instagram’s role in enabling body shaming and the mocking of others; and, more recently, the overwhelming impact of ChatGPT and the fascination with, and growing use of, the thousands of apps and services built on OpenAI. Novak asks: but can you trust it? And responds: probably about as much as you can trust all online listings and crowdsourced input, which are the sources of GPT’s recommendations. From the user’s perspective, discerning fact from fiction when interacting with your organization is only becoming more critical.