As the year 2023 (or 02023, as the folks at the Long Now Foundation suggest) draws to a close, I will post a few comments.
I will begin by mentioning that I sold my condo in San Jose, California in July and now live in Tempe, Arizona. I am getting to know Tempe as well as the greater Phoenix area. My email addresses and phone numbers have not changed and still work. If someone has a valid need for my physical address, please contact me via email; I am not going to publicize my address here for every junk mail provider to harvest. And no, I did not leave California for any of the common reasons, such as taxes, crime, or the other reasons thrown about in the media these days. So I suggest some restraint from those who are thinking of using my departure from California as one more example to justify their prior opinions.
Another personal item is that in October I finished the three-year course of medication I had been taking. Following the cancer surgery for a tumor in July 2020, and after a few weeks of stabilization, I started the medication intended to prevent recurrence of the cancer. The medication did have side effects; however, according to my oncologist, I tolerated them better than many patients do. I feel better now that I am no longer taking the medication. I am glad I am done, and I hope I do not need to go back on it again.
Rather than commenting on the year as a whole, I am going to write out a few ideas I have been considering concerning ChatGPT and other similar AI technology. Articles about ChatGPT were popping up in many publications in early 2023, and the topic came up in many conversations. One could encounter an "AI will kill us all" story or an "AI will take all of the jobs and we will all starve" story. And we have all read about ChatGPT making up references to legal cases that do not exist, and about educators worried that students will use the technology to write their papers. Artists and authors are concerned that they will be replaced.
I am not dismissing these concerns, nor am I adopting them uncritically. My suggestion is that there be some serious thought about the situation. So what follows are some thoughts I have had during the year. They are preliminary and not perfect; I hope they are helpful.
According to the Radiological Society of North America website, human radiologists outperform AI in accurately identifying the presence and absence of three common lung diseases; the article reports on a study published in the journal Radiology. If we take the RSNA article as correct, we might look to a future where, in perhaps ten years, an AI is slightly better than a human, but an AI and a human together are slightly better than the AI alone. Now imagine another ten years, after which an AI alone is as good as an AI and a human together; the human is superfluous. Or what happens if the AI alone is better than an AI and human together; that is, the human is just getting in the way. I am not offering the two consecutive ten-year periods as predictions; I am using ten years because twenty years split into two ten-year segments is a manageable exercise for most readers. As we leave 2023 and enter 2024, we can look back to 2004 and 2014 to see how much progress has been made. So I think it is not unreasonable, and perhaps should be expected, that by 2044 an AI working alone will be better than either a human alone or a human and AI together. If that is the case, then the standard of medical care will be an AI working alone doing radiology. I suspect that most people will say "well, this involves the health of people, so the humans need to step aside," or express some similar attitude.
Now consider the case of therapists, such as clinical psychologists. Many of us can remember Eliza and similar early conversational-response programs, and the common wisdom was that they were cute but would never replace a trained therapist. What happens if an AI becomes better at providing service as a therapist, even if the AI therapist is beneficial only for a subset of the population that prefers to deal with an AI rather than a human? Would not the guideline of providing "best care" lead to the AI replacing human therapists in at least some situations?
Now consider artists and writers. If we take as valid the assertion that positive depictions of LGBTQ+ characters in written and visual media helped lead to greater acceptance of LGBTQ+ individuals, then we might want to consider the persons who created those depictions. In the case of visual media, that would be the producers, directors, writers, artists, and those who act or voice the parts. We remember the Writers Guild of America screenwriters strike of 2023; one of the issues in the strike related to the use of AI in creating scripts. That strike was settled, and the question appeared less frequently in news reporting afterward. However, let us think about the situation in a few years when the contract expires. Looking to the future, imagine there is an AI which, paired with some humans, can create scripts that have humor, tension, angst, grief, and all of the other things that make a show successful, and that also tend to make persons more accepting of and kinder to other people. Would everyone be happy that AI was involved because of the positive outcome? But what if the number of humans involved was half the number previously needed? What happens when the AI does not need humans at all but is superior at creating stories that build feelings of compassion and benevolence? Is this worth the trade-off?
At this point I suspect that most readers will have noticed the progression from a medical setting through a therapy setting to what might be called a creative setting. Certainly this is an example of an old rhetorical device; it would be interesting to imagine how persons might react to these examples if the order of presentation were reversed.
It is worth noting that in the therapy example it was implied that the person undergoing counseling was "better" after counseling from the AI. But what definition of "better" do we use? Consider a very pious religious person who goes to an AI therapist in order to gain insight into how to get along better with coworkers. For this person, the idea of "better" might be to continue being devoutly religious while developing the skills and insights to interact more successfully with coworkers. Now consider this person after the therapy, who gets along much better with their coworkers and is also now not religious at all. This person might well say their current concept of "better" includes being rid of religion in addition to getting along with their coworkers. However, I suspect that this person's coreligionists would not consider "better" an accurate description of the post-therapy situation. I mention this because a similar shift in the meaning of "better" might occur with a human therapist, and we want to make sure we do not forget it.
And certainly we can imagine the possibility of an AI producing screenplays that would tend to lead people to be intolerant of others. Humans have historically been good at creating propaganda and stirring animosity toward other persons based on race, ethnicity, religion, and many other factors, so it is reasonable to think that an AI could do so as well.
So what is the answer? In my opinion there is no single "answer," and I am not persuaded by those who are in a "moral panic" and demand immediate drastic action; I seldom find "moral panic" arguments persuasive. What I suggest for 2024 is some serious consideration and analysis. I recommend the following:
1. Every analysis or proposed action put forward should contain a detailed addendum describing the flaws and shortcomings of the analysis or proposed action. This addendum should be written by those who prepared the analysis or action; everyone needs to be seriously critical of their own analyses and proposals. The addendum should include what tests could be done to show the analysis or action to be false, in part or in whole.
2. Every criticism of an analysis or proposed action must begin with a restatement of the analysis or action in the most reasonable manner, one acceptable to those proposing it. This approach has several names, including the "Principle of Charity," "steelman," and "steelwoman," and it is useful for reducing what are commonly called "strawman" arguments. In addition, if there is something with which the opponents of a proposal agree, then that should be acknowledged explicitly.
These two guidelines are not necessarily perfect in every circumstance. They are at best only a beginning; however, I think they are a very valuable beginning, and much more work is needed. The astute reader will note that these guidelines are based on the ideas of Sir Karl Popper, W. W. Bartley, and Daniel Dennett. If you are not familiar with Pan-Critical Rationalism and with Rapoport's Rules (aka Dennett's Rules), I suggest familiarizing yourself with them. I will also note that I first encountered the term "steelman" in a podcast by Julia Galef.
Hopefully these comments are helpful and we can work for a better 2024. For those interested in reading a classic story that touches on some of these issues, I recommend Jack Williamson's "With Folded Hands"; there is a fine discussion of the story on Wikipedia. I have also written a bit of short fiction titled The Story Of The Story, which looks at the use of AI in a near-future setting; at least it was the near future when I wrote it a couple of years ago.