(Opinion) Lou Cartier: New chatbot threat or opportunity? Yes.

In the short months since ChatGPT, a dramatic new tool in artificial intelligence, first captured the imagination of students and instructors, job seekers and HR pros, neighborhood “googlers” and Big Tech combatants, the earth continues to shift beneath our feet.

It seems the world has accepted the invitation from OpenAI, a mature Silicon Valley startup, to register for a free account and begin experimenting with the company’s new chatbot … a “y’all come” business development strategy once deemed too dangerous.  No longer.

Today, a new and improved GPT-4, no longer free (to amateurs), powers Microsoft’s Bing search engine.  To preserve its market dominance, Google is about to counter with Bard.  Urgent consumer demand appears to be thrusting business, education, and “learning organizations” into a brave new world … in ways not fully understood, at a price not fully grasped.

This unfolding innovation in generative artificial intelligence is stunning. In mere seconds, at “no cost” to the consumer (assuming tolerance for pesky “disinformation”), the bot can:

  • Draft style-sensitive written work, from three-page essays to a virtual cookbook, from 10,000-word term papers to a Uniform Bar Examination score that beats nine out of 10 human test takers.
  • Produce industry-standard products, from resumes and cover letters to scripts, screenplays and speeches, from polished stand-up comedy bits to coherent, debugged lines of computer code.
  • Generate colorful graphics and “artistic” images, even rudimentary (?) video games, from your written specifications.  Output grows sharper and more aesthetically pleasing every day.  Quality business presentations and free “clip art” in no time.

And yet, amidst the headlong rush to capture your trust, consumer vigilance seems prudent.  Yes, this amazing software recognizes patterns and regurgitates language in a given speaker’s voice.  Yes, as acknowledged by top corporate information officers, “the marginal cost of creating something new is very, very cheap.”

Brave new world, indeed.

And yet … while “conversational,” generative AI asks for a user’s trust without offering verification.  For example, the documentation of sources that a college professor expects of a comparative business analysis or “argumentative essay” is lacking.  Early reports find seemingly authoritative generated facts “simply made up.” Evidently, the system is not designed to fact-check … nor can it point me to the nearest Lenten fish fry.

Given the search landscape (internet, Twitterverse, and such), some liken this experimentation to “exposing the public to dangerous chemicals.”  Evidently Google, Microsoft, and other corporate enterprises reason that feedback gathered directly from likely users helps clear the system of “bugs.”

In time, they expect to overcome unintended (or deliberate) character assassination and invasion of privacy in commercial web browsers and search engines.  They pledge to moderate “irresponsible” exposure to racism, sexism, and worse.

In education, my primary bailiwick, colleagues insist upon academic integrity.  One measure of success is that students learn how to find and discern credible information, make a well-reasoned argument, and express their logic and feelings gracefully, authentically, humbly.

Granted, one finds deep thinkers in academe, government, business, and technology who worry that generative AI poses “a real and imminent threat to society.” They fear bogus information going unchallenged, “popular” opinions equated with verified fact, and malicious code escaping common cybersecurity safeguards.

Could a mere computer program “transform the human cognitive process,” with “machine thinking” replacing our own?  Is it prudent to expect more of tech than we do of ourselves?  For years, TED audiences have been cautioned not to give tacit permission for technology to take us to places we don’t want to go.

Fortunately, deep thinking colleagues help me ponder the philosophical and ethical overtones of our culture’s insistence on fast results and impatience with “slow, ethical, methodical deliberation.” A friend’s email illustrates:

“As computing has advanced, more and more of us expect to be able to put in an input (A) and get an output (B) without knowing how the computer gets from A to B.  The risk is that we do not understand how to get from A to B either.”  Unless the computer itself fesses up, “we humans do not recognize when it has performed an error.”

Could it be that “thinking is not simply spitting out a conclusion, but piecing together how to reach and support that conclusion?”  Hmm.

Back on the farm, the imminent challenge to Aims faculty is to:

  • Develop strategies to “detect, acknowledge, and embrace” perceived threats and opportunities from this transformative technology.  Remain committed to transparency and accountability, to student creativity without plagiarizing, to academic honesty policies responsive to current tensions.
  • Create more purposeful learning activities and exams, write better essay prompts, and introduce students to self-paced AI modules that deepen their cognitive skills.  In my discipline, this facilitates their move to “case-based learning and team projects.”  (https://bizcombuzz.com/2023/01/24/the-newest-ai-challenge-to-teaching-chatgpt)
  • Continue to weave our personal expertise and creativity into “the classroom,” whatever the instructional mode du jour.  Deliver purposeful, values-based, individualized “coaching” to our students, the kind that machines cannot replicate.

Yet.

— Lou Cartier teaches at Aims Community College, focusing on legal and ethical challenges facing business and the practical “soft skills” that underlie their people’s success.  As a member of the faculty’s elected leadership team, he participates in college-wide planning and policy review.  Views and opinions here are solely the author’s and do not necessarily reflect those of Aims.
