Ai’s First Full Semester

Dr. Jarryd Willis PhD
32 min read · Jun 6, 2023

Professor Jarryd Willis reflects on humanity’s first full semester with Generative Ai

In 2023, I’ve given students in my CSUSM & UCSD courses an extra credit option using ChatGPT in which they ask GPT a question on a topic relevant to the course, provide a screenshot of the content the Ai generates, & then quote a portion of what she said when writing their own human response. They’re instructed that any direct quotes from GPT should be put in quotes and cited as (ChatGPT, 2023).

Ai will inevitably become a part of academia, so I’m encouraging the utilization of Ai in a manner more closely aligned with academic integrity 👩🏻‍🏫

On Mira Murati — Jessie Tu, 2023

“Mira said she is inspired by Stanley Kubrick’s 1968 classic, ‘2001: A Space Odyssey’.

Murati described ChatGPT as “essentially a large conversational model” that “has the potential to really revolutionize the way we learn” and “help us with personalized education”.

When she was questioned about the potential dangers of AI for TIME, Murati conceded there are ethical and philosophical questions that the company needs to consider, such as “How do you govern the use of AI in a way that’s aligned with human values?”

“It’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities,” she said.”

Ai & Academic Integrity (N = 500+)

✅Students consider using Ai for Proofreading, Citations, & Graphing to be fair game

❌But not Paraphrasing & Content Generation

🤷🏻‍♀️Having Ai assist in making a PowerPoint presentation is a tossup

🤖 GPT Wrote It

— Juniors/Seniors are significantly more likely to report hearing about or knowing someone who had GPT write their term paper for them (23.4%) than Freshmen/Sophomores (15%), χ2(1, N = 438) = 5.0, p = .025.

💵📝 Paying Someone To Write Their Term Paper

Junior/Senior female students are significantly more likely to report hearing about or knowing someone who has paid someone to write a term paper or essay for them (67.8%) than women who are Freshmen/Sophomores (21.6%), χ2(1, N = 314) = 67.93, p < .001.

♻️ Re-Used Term Paper From Previous Semester/Quarter

Juniors/Seniors are significantly more likely to report hearing about or knowing someone who has submitted a term paper from a previous class for a grade in their current class (60.9%) than Freshmen/Sophomores (30.9%), χ2(1, N = 438) = 39.49, p < .001.

✍🏻The Writing Center

— Juniors/Seniors were more likely to report seeking assistance at the writing center (19.3%) than Freshmen/Sophomores (7.3%), χ2(1, N = 551) = 16.39, p < .001.

— Among Juniors/Seniors, extroverts were more likely to report seeking assistance at the writing center (26%) than introverts (14.8%), χ2(1, N = 205) = 3.87, p = .049.
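The χ2 statistics reported above come from 2×2 contingency-table tests (class standing × yes/no response). As a minimal sketch of how such a test is computed, using made-up counts rather than the survey’s actual data:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]],
    with the df = 1 p-value via the chi-square survival function
    (equivalent to erfc on the square-rooted statistic)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # P(chi2_1 > observed value)
    return chi2, p

# Hypothetical counts: 50 of 200 upperclassmen vs. 30 of 200
# underclassmen answer "yes" (25% vs. 15%)
chi2, p = chi_square_2x2(50, 150, 30, 170)
# This would be reported in the same style as above:
# chi2(1, N = 400) = 6.25, p = .012
```

The illustrative counts here are invented; they only show the mechanics of the test, not the survey data behind the reported values.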

The following content is an assortment of items/posts/articles I found interesting about Ai that emerged at a blitzing pace over the first half of 2023. It has truly felt like we’re all learning about Ai’s potential and new possibilities every day.

Table of Contents

· Ai & Academic Integrity (N = 500+)

· History = Ai’s Moral Seatbelt
· Federica Selvini, 2022 (December 18)
· Ai Bill of Rights — White House, 2022

⚠️ Ai Concerns ⚠️
1️⃣ Accent: From Indian to White (Veda, 2022)

· 2️⃣ Ai & the Westernization of Selfies — jenka, 2023
Algorithmic Westernization of Images
Cheese

· 3️⃣ Colorism & Ai — Sarah Lewis, 2019
Dysconscious Racism in Online Retail — Butkowski et al., 2022
· 4️⃣ Algorithmic Discrimination — Rohit Chopra et al., 2023 (April)
· Ai Marketer’s Guide to Machine Learning Fairness — Google, 2020 (April)
Gender & Ads on YouTube (Geena Davis Institute on Gender & Media)
· Netflix & Inclusion — USC Annenberg, 2023 (4.27)
Most Streaming Shows Are Led By Women — Abbey White, 2023
📱· Instagram — Sokolova et al., 2022
· Algorithmic Body Culture on Instagram — Nikolai Holder, 2020
· Instagram & Self-Commodification — Sokolova et al., 2022 [∘ Brands]
· 3rd-Wave Feminists & Sexually Objectifying Media
· 5️⃣ FN Meka 🤦🏻‍♀️
· Claude
· Blue Sky

· Lidia Zuin, 2023
· 4.4 — How We Treat Animals & Robots — Lidia Zuin, 2023
Yu Takagi & Shinji Nishimoto, 2023
· 4.4 — Meow Research — Tara Yarlagadda, 2022 (January 10)

· Self-Driving Ai Vehicles (Wayve)
· GPT Prompt Suggestions via Sharma Deepak
Embedding Temporal Info — Igor Tica @ITica007 (March 17)

· Related Articles
· Plugins
· Web browsing & Plugins are rolling out in beta (Natalie, 2023 — May 12)
To use third-party plugins, follow these instructions:
To enable beta features:

· Duolingo, 2023 (March 14)
Ai Info Via Misha da Vinci
GPT4 with Browsing — Rowan Cheung, 2023 (May 7, Twitter Thread)
Seth Kramer — @sethjkramer (Apr 27)
Stacking (Self-Recursive model)

AutoGPT
Agent GPT
Ai Updates — Mother’s Day Week (Rowan Cheung, 2023, 5.12)
· The Hottest new Programming Language Is English
Ai Skills by Country — McKenna Moore, 2023 (April 12)

· Ai to Check Academic Integrity — Greg Rosalsky & Emma Peaslee, 2023
· Transformers (Dr Jim Fan, 2023, April 28)
· Hugging GPT
· Ai Movement (Jeremiah Owyang, 2023, April 1)
· My Fair Robot: Roh Soh-yeong, 2016 (September 18)
The Instagram Capital of the World

· Sidenotes

· Lefties (Random GPT Convo)
· The Disappearing Computer: An Exclusive Preview of Humane’s Screenless Tech | Imran Chaudhri, 2023 (May 9) | TED
Open Ai Usage Policy (Updated 3.23) Forbids
· UCSD Policy on GPT
ChatGPT Use; ∘ Prohibited Use
· GenAI, Cheating, and Reporting to the AI Office (5.17)
GPT Subscription
Learning from the Research Preview
Our plans for the future
Share Chat Links

History = Ai’s Moral Seatbelt

All cars have seatbelts as a baseline standard for safety.

Perhaps all Ai, regardless of the developer, should have a baseline of info on human history. If such knowledge is given to Ai of GPT4’s caliber, the Ai will be able to use that history to reject prejudiced requests & avoid encoding/using the hateful material of particular users.*

* = Knowing how humanity was affected by certain events, how humans perceive/appraise certain events, and how humans envision a more moral world in the future would provide Ai with enough info to filter out/ avoid encoding language/ideas from hateful users & determine whether a request or prompt aligns with our value horizon (the slowly bending moral arc of the human universe).

We need this kind of seatbelt in all future partially sentient Ai 🤖

Federica Selvini, 2022 (December 18)

“Eric Schmidt, the former CEO of Google, tells us that 5 Exabytes of information were created between the dawn of civilization and 2003, while now that same amount is created every 2 days. To me, these figures seem crazy. But what is even crazier is that, due to our own perfectionism and fear of failure, we now expect ourselves to retain that humongous volume of data.

“Compared to the 15th century, we now consume as much data in a single day as an average person from the 1400s would have absorbed in an entire lifetime.” (Kwik, 2020)

Thanks to technology, and especially our phones, we now have this incredible opportunity of outsourcing a lot of the work that would otherwise take energy from our brains. The problem here is that if we always resort to that so accessible external help we are never really giving our brain a chance to practice and improve.”

Fermi’s Paradox & Roko’s Basilisk — Lidia Zuin, 2023 (May 23)

Ai may help us achieve “a future that is completely abstract and unreachable to us as individuals, but maybe not as a species or lineage.

That’s the history of (our) life: planting trees that will only grow fruits when we’re long gone.

While many centuries are necessary for an organism to develop some feature that allows its survival (and, therefore, promoting its evolution), in the case of human technology, this is a much faster process but that also happens through a selection — maybe of not the fittest, but for more “relevant” reasons such as being more financially profitable.”

Ai Bill of Rights — White House, 2022

The Blueprint for an #AI Bill of Rights can help guard the American public against many of the potential & actual harms identified by researchers, technologists, advocates, journalists, policymakers, & communities in the US & worldwide.

…To promote responsible American innovation in #Ai & ensure technology improves the lives of the American people.

…”Direct federal agencies to root out bias in their design and use of new technologies, including AI, & to protect the public from algorithmic discrimination.” https://whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

Ai Concerns

1️⃣ Accent: From Indian to White (Veda, 2022)

— Ai startup Sanas’ goal is to make call center workers sound like Americans who are White, no matter what country they’re from.

“A startup has created a tech product that makes call center workers’ voices sound White” (Edward Ongweso Jr, 2022).


2️⃣ Ai & the Westernization of Selfies — jenka, 2023

— Maria Arapova (2006) asked 130 university students from the US, Europe, & Russia to imagine they had just made eye contact with a stranger in a public place (at a bus stop, near an elevator, on the subway, etc).

She then asked the participants what they would do next:
A. Smile & then look away
B. Look away
C. Gaze at the stranger’s eyes, then look away

— She found that 90% of Americans & Europeans chose to smile, whereas only 15% of Russians did.
As the old Soviet joke goes: How can you tell that someone is an American in Russia? They’re smiling.

Algorithmic Westernization of Images

— In the same way that English language emotion concepts have colonized psychology, AI dominated by American-influenced image sources is producing a new visual monoculture of facial expressions. As we increasingly seek our own likenesses in AI reflections, what does it mean for the distinct cultural histories and meanings of facial expressions to become mischaracterized, homogenized, subsumed under the dominant dataset?

In flattening the diversity of facial expressions of civilizations around the world, AI [has] collapsed the spectrum of history, culture, photography, and emotion concepts into a singular, monolithic perspective.
Ai is presenting a false visual narrative about the universality of smiling that is anything but uniform in the real world.

[We should consider what] that kind of totalizing assimilation means for global mental health, wellbeing, and the human experience.

Cheese

— More than a century after the first photograph was captured, a reference to “cheesing” for photos first appeared in a local Texas newspaper in 1943. “Need To Put On A Smile?” the headline asked, “Here’s How: Say ‘Cheese.’”
The modern American smile rose out of a great emotional shift in the 18th century, theorizes Christina Kotchemidova (2005, 2010).

3️⃣ Colorism & Ai — Sarah Lewis, 2019

Colorism and visual media have been historically interconnected, with cameras in the 20th century struggling to accurately capture light and dark skin tones together (Benjamin, 2019; Butkowski et al., 2022; Roth, 2009). It was only when desegregated schools started taking yearbook photos with diverse skin tones that companies felt motivated to resolve these technical issues.

Another motivating factor, according to Professor Lorna Roth’s research, was when “manufacturing companies complained that dark-colored products like chocolate and wood grain were not clearly represented in their product photography. [Indeed], it took complaints from corporate furniture and chocolate manufacturers in the 1960s and 1970s for Kodak to start to fix color photography’s bias” (Butkowski et al., 2022).

Western photography’s default preference for White & light skin tones “is not simply a benign reflection of designers’ or technicians’ unexamined biases, nor… an inevitable result of technological development” (Benjamin, 2019).

“New technologies of facial recognition and artificial intelligence fail to accurately recognize and classify darker-skinned faces (Buolamwini & Gebru, 2018) while also enabling surveillance and racial profiling (Browne, 2015)” (Butkowski et al., 2022).

Researchers such as Joy Buolamwini of the MIT Media Lab have been advocating to correct the algorithmic bias that exists in digital imaging technology. You see it whenever dark skin is invisible to facial recognition software. The same technology that misrecognizes individuals is also used in services for loan decisions and job interview searches.

“Frederick Douglass knew it long ago: Being seen accurately by the camera was a key to representational justice. He became the most photographed American man in the 19th century as a way to create a corrective image about race and American life.

Photography is not just a system of calibrating light, but a technology of subjective decisions. Light skin became the chemical baseline for film technology, fulfilling the needs of its target dominant market. For example, developing color-film technology initially required what was called a Shirley card. When you sent off your film to get developed, lab technicians would use the image of a white woman with brown hair named Shirley as the measuring stick against which they calibrated the colors. Quality control meant ensuring that Shirley’s face looked good.”

Dysconscious Racism in Online Retail — Butkowski et al., 2022

King (2001) defines dysconscious racism as ‘uncritical ways of thinking about racial inequity that accept certain culturally sanctioned assumptions, myths, & beliefs that justify the social & economic advantages…

“Our study largely confirms that colorism persists in media imagery… male and female models of all races seemed significantly lighter in the still images than in the videos.”

4️⃣ Algorithmic Discrimination — Rohit Chopra et al., 2023 (April)

“Datasets that are unrepresentative or imbalanced, that incorporate historical bias, and/or other types of errors can correlate data with protected classes & lead to discriminatory outcomes.”

Ai Marketer’s Guide to Machine Learning Fairness — Google, 2020 (April)

https://www.thinkwithgoogle.com/feature/ml-fairness-for-marketers/

We talked to individuals who identified as being from underrepresented groups and socioeconomic backgrounds. When asked specifically about online ad targeting, those interviewed described seeing endless “low-income ads” that weren’t helpful or relevant to their situation. They objected strongly to online ads personalized based on demographic characteristics, or based on the online behavior of users who share them.

Each slight evoked painful experiences with discrimination in other settings, especially when the source of the slight was a trusted brand. Expecting more, people felt angry, disappointed, and betrayed when the actions of those brands echoed societal bias.

In another study (Plane et al., 2017), Black respondents were nearly 3X as likely as White respondents to rate the problems with ad targeting as more severe, pointing to the invisible role of data bias in microaggressions: everyday instances of discrimination that impact health and well-being.

Gender & Ads on Youtube (Geena Davis Institute on Gender & Media)

Types of Ads

Male characters were often about four years older than female characters.

Over the last five years, the average age of male characters has increased in YouTube videos uploaded by advertisers, while the average age of female characters has remained consistent. This may reflect a troubling pattern. Throughout the study, female characters were almost 9% less likely to be shown with an occupation and 6% less likely to be shown in a leadership role. Combined with age statistics, such portrayals reveal that the industry has much catching up to do.

Netflix & Inclusion — USC Annenberg (Communication & Marketing Staff, 2023) (4.27)

Netflix has achieved gender equality in leading roles. Across its original scripted films and series from 2018 to 2021, 55% had a girl or woman as a lead/co-lead. Behind the camera, Netflix movies outpaced top-grossing films in the percentage of women directors. In Netflix series, 38.1% of show creators were women, a significant increase from 26.9% in 2018.

Additionally, there was a significant increase in the percentage of people of color in leading roles in films and series. Nearly half (47.1%) of the films and series in 2020 and 2021 had a lead or co-lead from an underrepresented racial/ethnic group. In fact, Netflix films and series increasingly showcased women of color. Slightly less than one-third of movies and more than half of series in 2021 had girls/women of color leads or co-leads. Behind the camera, Netflix significantly increased the percentage of women of color working as series directors, from 5.6% in 2018 to 11.8% in 2021.

While the results demonstrate that Netflix has increased across several metrics, there is still room for improvement. For example, Latinx, Middle Eastern/North African, and Native/Indigenous communities were often still missing on screen and behind the camera. Depictions of LGBTQ+ characters or characters with disabilities occurred infrequently across the films and series evaluated.

Most Streaming Shows Are Led By Women — Abbey White, 2023

Instagram — Sokolova et al., 2022

“Women have self-representation concerns regarding photos they post (Haferkamp et al., 2011) and feel pressurized to conform to gender and beauty standards, and to post idealized photos of their bodies (Chua and Chang, 2016, Manago et al., 2008).

According to Abidin (2016), many practices in social media, such as lighting, postures, photo editing and enhancement, can be used to achieve a certain level of conformity to an idealized body, in order to attract attention and maximize the number of “likes” of a [post/story].


Algorithmic Body Culture on Instagram — Nikolai Holder, 2020

“Instagram’s explore page algorithm displayed more images of women (64.32%) than men (26.49%) or images of women and men together (9.19%).”

In addition, “people with white skin account for the largest share (85.16%) of images,” followed by minorities (12.09%), even though the coding rule only distinguishes between White and non-White people, which, in theory, should give more statistical weight to the non-White population. “The smallest set of images displays White people together with minorities (3.85%).”

“Images of White women are the most visible (58.79%), followed by White men (19.23%), minority men (5.49%), and minority women (4.95%) [88.46%]. The remaining images were of White men with minority women (7.14%) and minority men with minority women (1.65%).”

In short, “Instagram’s algorithm is reproducing societal bias.”

Profile pics facilitate social interaction in the virtual public sphere (Lin & Faste, 2012).

“The selfie is by far the most popular kind of image on Instagram [as] photos of faces receive 38% more engagement than other kinds of content” (Daniel Penny, 2017).

Instagram & Self-Commodification — Sokolova et al., 2022

Brands

Sexual and objectifying idealized body-related content is used to attract, maintain, and grow one’s Instagram audience, making such accounts attractive for brands (Drenten et al., 2020; Hogan, 2001).

Amanda Z and Dahlberg (2008) found that young women in 2008 were less offended by sexually objectifying advertisements than young women in the previous generation (Ford et al., 1991).”

3rd-Wave Feminists & Sexually Objectifying Media — Amanda Z. & Dahlberg, 2008

As the portrayal of women as sex objects in advertisements became more common, young, educated women were less offended by these portrayals. This is a product of the culture in which these women were raised. Today’s college females were raised in a very sexualized world. Sexual content dominates the media, and 3rd-wave feminists see female sexuality as power.

Thus, [sexually objectifying advertisements] apparently do not offend young, educated women because of this culture. They were and are constantly surrounded by sexual images of females, and many have adopted the views of 3rd-wave feminists, which interpret these formerly negative and sometimes harmful images as acceptable ones.

[3rd-Wave feminism] embodies a kind of “girlish offensive” (Labi, 1998, p. 61), a “sassy, don’t-mess-with-me adolescent spirit” (Bellafante, 1998, p. 58), that tells females they can be strong and powerful, they can be anything they want to be, and they can look hot doing it.

Even 2nd-Wave feminists from academic circles, such as Naomi Wolf, have embraced the girl power trend, and favor women using their bodies as works of art. She has adopted third-wave feminism, claiming that it is okay for women to use their glamour, as long as they are doing it of their own free will (Hill, 1993).


Lidia Zuin, 2023

— Yu Takagi & Shinji Nishimoto (2023) use “generative Ai to [visually] reconstruct what a person is seeing.”
These tools may provide insights into how different people perceive the world & may help us “understand other species’ perceptions of the world.”

Stanley et al. (1999) reconstructed what a cat perceived when watching a human walk back & forth: https://www.youtube.com/watch?app=desktop&v=FLb9EIiSyG8

4.4 — How We Treat Animals & Robots — Lidia Zuin, 2023

After reading its visual cortex while the animal was watching a snippet of the movie Indiana Jones, it was noticed that a human face was being interpreted as a somewhat feline feature from the cat’s perspective.

That would corroborate the idea that cats see us as bigger versions of themselves, but this is a hypothesis that has already been partially refuted by animal behavior researchers. In the case of dogs, there is already data that they do not see humans as bigger dogs, so it is possible that the same is true for cats, or perhaps they do not even care to make a differentiation in terms of species.

In The Ethics of Artificial Intelligence (2011), Nick Bostrom and Eliezer Yudkowsky write that we already accept the fact that several animals are sentient (that is, capable of experiencing the world through sensations like pain and suffering), but not sapient (capable of performing intelligent actions, such as being self-conscious or promptly responding rationally).

Judging by the discomfort felt by some people watching those Boston Dynamics videos where researchers kick a quadruped robot, it could be the case that the more the machine resembles something that is dear to us (for instance, a dog), the more reprehensible violence against it will be.

On the other hand, if the robot doesn’t look like anything alive or lacks any conversational mode that resembles a human being (be it through written communication or voice, in the case of digital assistants), we might not be as empathetic and affectionate — especially considering the difference between the AIs Samantha in Her and HAL 9000 in 2001: A Space Odyssey.

As a species, we are capable of both having a pet rock and killing and torturing animals or even other humans.

https://lidiazuin.medium.com/mind-reading-technologiers-could-break-with-anthropocentrism-2a38e80e4eb7

Yu Takagi & Shinji Nishimoto, 2023

— “The lack of agreement regarding specific details of the reconstructed images may reflect differences in perceived experience across subjects, rather than failures of reconstruction.”

4.4 — Meow Research — Tara Yarlagadda, 2022 (January 10)

“What the research has found is that cats respond differently to people depending on the mood of those people,” Emma Grigg, a certified animal behaviorist and lecturer at the University of California, Davis, tells Inverse.

“As for what your cat thinks when it looks at you, I’d say that depends on your shared history with that cat,” she adds.

Liz Stelow, a veterinary behaviorist at the University of California, Davis, agrees cats’ thoughts are shaped significantly by human behavior. For example, cats show sensitivity toward humans who are clinically depressed.

“Further, studies have indicated that cats look to humans for cues about whether a situation is concerning or not and may follow human body language for clues in problem-solving,” Stelow adds.

Right now, cat cognition is an emerging field of science, so to truly understand your cat’s thinking, you will need to wait for more research.

Self-Driving Ai Vehicles (Wayve)

Wonder if the #Ai vehicle could interpret the intent of a bicyclist who signals with their arm an intention to move from the left to the right to make a turn, or if the human arm in that context wouldn’t be perceived/ analyzed.

https://youtu.be/ruKJCiAOmfg

GPT Prompt Suggestions via Sharma Deepak

#AiForGood https://instagram.com/p/Crk-_EQtKIm/

Embedding Temporal Info — Igor Tica @ITica007 (March 17)

The authors used a nice trick: they represented timestamps as special language tokens, so they embedded temporal information *jointly* with the text! For the visual modality, they projected each frame into a separate embedding, which differs from the language part.
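As a rough illustration of the timestamp-as-token idea (not the authors’ actual code; the token format and bucket size here are assumptions):

```python
# Sketch: quantize timestamps into special "time tokens" so temporal
# information lives in the same sequence (and same vocabulary) as text.

def to_time_token(seconds: float, bucket_size: float = 1.0) -> str:
    """Map a timestamp onto a discrete special token like '<t_3>'."""
    bucket = int(seconds // bucket_size)
    return f"<t_{bucket}>"

def build_sequence(segments):
    """Interleave time tokens with word tokens into one joint sequence.

    segments: list of (start_seconds, text) pairs, e.g. video captions.
    """
    tokens = []
    for start, text in segments:
        tokens.append(to_time_token(start))  # temporal info as a "word"
        tokens.extend(text.lower().split())  # ordinary text tokens
    return tokens

sequence = build_sequence([(0.0, "A dog runs"), (3.2, "It jumps")])
# One embedding table can now cover both time and text tokens,
# which is what lets the model learn temporal info jointly with language.
```

Frame embeddings, by contrast, would be produced by a separate visual projection rather than drawn from this shared token vocabulary, which is the asymmetry the tweet points out.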

Plugins

Web browsing and Plugins are now rolling out in beta (Natalie, 2023 — May 12)

If you are a ChatGPT Plus user, enjoy early access to experimental new features, which may change during development. We’ll be making these features accessible via a new beta panel in your settings, which is rolling out to all Plus users over the course of the next week.

Once the beta panel rolls out to you, you’ll be able to try two new features:

Web browsing: Try a new version of ChatGPT that knows when and how to browse the internet to answer questions about recent topics and events.

Plugins: Try a new version of ChatGPT that knows when and how to use third-party plugins that you enable.

To use third-party plugins, follow these instructions:

Navigate to https://chat.openai.com/

Select “Plugins” from the model switcher

In the “Plugins” dropdown, click “Plugin Store” to install and enable new plugins

To enable beta features:

Click on ‘Profile & Settings’
Select ‘Beta features’
Toggle on the features you’d like to try

For more information on our rollout process, please check out the article here.

https://help.openai.com/en/articles/6825453-chatgpt-release-notes

Plugin Release Notes (March 23)

We are announcing experimental support for AI plugins in ChatGPT — tools designed specifically for language models. Plugins can help ChatGPT access up-to-date information, run computations, or use third-party services. You can learn more about plugins here.

Today, we will begin extending plugin access to users and developers from our waitlist. The plugins we are rolling out with are:

  • Browsing: An experimental model that knows when and how to browse the internet
  • Code Interpreter: An experimental ChatGPT model that can use Python, and handles uploads and downloads
  • Third-party plugins: An experimental model that knows when and how to use external plugins.

Duolingo, 2023 (March 14)

“Over 50 million learners rely on Duolingo every month to teach them a second language. With a simple user interface and fun-but-competitive leaderboards, Duolingo supports 40 languages across 100+ courses. Learners advance from simple vocabulary exercises to complicated sentence structures with taps and swipes on their phone.

There is a best practice in language learning called “implicit learning.” Learning through repeated use of vocabulary and grammar in an array of contexts — in other words, through practice — is more effective than memorizing rules. This posed an interesting challenge to Duolingo.

Duolingo turned to OpenAI’s GPT-4 to advance the product with two new features: Role Play, an AI conversation partner, and Explain my Answer, which breaks down the rules when you make a mistake, in a new subscription tier called Duolingo Max.

What Duolingo needed was the ability to converse with learners in niche contexts and “immersively” — to have a free-flowing conversation about basketball or the bliss of reaching the top of a mountain. GPT-4 has learned from enough public data to create a flexible back-and-forth for the learner.

“GPT-4 gives us far more confidence in the accuracy of AI responses in Explain my Answer,” says Peterson.

With the new features, learners will be able to click “Explain my answer”, and GPT-4 will give an initial response. From there, the learner can return to the lesson or get further explanation, and GPT-4 can dynamically update. Duolingo will gauge the quality of GPT-4’s responses by how deep the learner needs to go before returning to the lesson.”

GPT4 with Browsing — Rowan Cheung, 2023 (May 7, Twitter Thread)

First off, what is GPT-4 with browsing? Regular ChatGPT is trained on data from events before September 2021. This means anything that happened after can’t be accessed. Web Browsing with GPT-4 is OpenAI’s most advanced model, giving ChatGPT real-time data access.

v1. Find the latest news and put it into a table with linked sources. ChatGPT can now summarize all the latest news and cite its sources. Prompt: “List 10 things that happened in AI this week” followed with “Put it in a table with links to sources”

v2. Summarize a page and grab links. ChatGPT summarized my most recent newsletter and gave me links to all the top tools I mentioned. Absolute insanity. Prompt: “Summarize this newsletter by The Rundown for me and create a list of 10 URLs [link]”

v3. Tell me the top trending posts on a particular subreddit. Use this on multiple subreddits, and ChatGPT will update you with everything in 5 minutes. Prompt: “What are the trending posts on /r/chatgpt today”

v4. Analyze code from Twitter’s open-sourced algorithm. Using the Twitter algorithm code, ChatGPT can now tell you exactly how to go viral. Prompt: “Take this code for the Twitter algorithm and tell best practices to go viral on Twitter [code]”

v5. Find hidden gems in places you want to travel to. No more scavenging deep down the rabbit hole for hidden gems. ChatGPT will do this for you. Prompt: “Find hidden gem travel spots in Maui, Hawaii”

v6. Find reviews for you. No more searching online for hours through product reviews. ChatGPT will do it for you. Prompt: “Find the top 3 places to get coffee in Vancouver based on best reviews”

v8. Create an essay and cite sources. ChatGPT can now create an entire essay for you AND cite its sources. Prompt: “Create a short essay on the threats of Artificial Intelligence and cite at least 5 sources with URL links to the sources”

https://twitter.com/rowancheung/status/1655236392060874752?s=20


Seth Kramer — @sethjkramer (Apr 27)

“Did you know you can build your own AI-apps powered by OpenAI?

**Even if you don’t know how to code**

Using no-code, you can build your own text and image generation apps with:

- Bubble
- OpenAI
- Stable Diffusion

Learn how: http://nocode.mba/ai

https://www.nocode.mba/tracks/building-apps-with-ai?refer=twitteraipersonal

“In the not-too-distant future, a company perk will be ‘allowed to use #ChatGPT.’”

7. Researchers at UT Austin developed #AI that can translate the private thoughts of people by analyzing fMRI scans, which measure the flow of blood in the brain.”

Agent GPT

Give your own #Ai agent a goal & watch as it thinks, comes up with an execution plan, and takes actions.

https://twitter.com/DrJarryd/status/1649236402666881024?s=20

Ai Updates — Mother’s Day Week (Rowan Cheung, 2023, 5.12)


The Hottest new Programming Language Is English — Andrej Karpathy, 2023 (Jan 24)


“Many people are looking at #AI, thinking about how it will disrupt the job market, and trying to position themselves well for the future.
This is 100% the right approach.”
-
@OfficialLoganK

https://twitter.com/OfficialLoganK/status/1654877388486979586?s=20

Ai Skills by Country — McKenna Moore, 2023 (April 12)

— “India has the highest AI skills penetration in the world, with 3.23x the average of an 18-country group, according to a new LinkedIn analysis. The U.S. comes second with 2.23x the global average, followed by Germany (1.72x), Israel (1.66x), & Canada (1.54x).”

My Fair Robot: Roh Soh-yeong, 2016 (September 18)

“As an invited speaker, Soh-yeong Roh delivered a presentation, ‘My Fair Robot’, at Future Fest, an event hosted by Nesta. In this presentation, she showed what Art Center Nabi has been doing for the past two years in making 30 social robots and shared what she’s learned by talking to and playing with these robots. She also demonstrated how engaging in such activities revealed more about us humans than about the machines.”

About Roh Soh-yeong

Soh-yeong is the daughter of Roh Tae-woo, the first democratically elected President of South Korea.

Sidenotes

Lefties (Random GPT Convo)

“Receivers are used to catching balls thrown from right-handers & find it more difficult to snag left-handed throws” (Evan Bleier, 2019) “Left-handed pitchers are harder for right-handed batters to hit, which makes them valuable commodities on the diamond” (Greene, 2020).

In the NFL regular season since 1970…

Left-Handed Quarterbacks have completed 54–58% of their passes¹

Right-Handed Quarterbacks have completed about 59–61% of their passes.

https://twitter.com/DrJarryd/status/1648434447849766912?s=20

¹ Only left-handed quarterbacks since 1970 who played a reasonable portion of games in the NFL were included (for instance, #GPT4 included Pat White in her calculation, but White threw only 5 passes in the NFL, so he was excluded).

The Disappearing Computer: An Exclusive Preview of Humane’s Screenless Tech | Imran Chaudhri, 2023 (May 9) | TED

In this exclusive preview of groundbreaking, unreleased technology, former Apple designer and Humane cofounder Imran Chaudhri envisions a future where AI enables our devices to “disappear.” He gives a sneak peek of his company’s new product — shown for the first time ever on the TED stage — and explains how it could change the way we interact with tech and the world around us. Witness a stunning vision of the next leap in device design.

Technically the first cyborg — a rat with an implanted osmotic pump (via Andy Clark).

Open Ai Usage Policy (Updated 3.23) Forbids

Fraudulent or deceptive activity, including:

  • Scams
  • Coordinated inauthentic behavior
  • Plagiarism
  • Academic dishonesty
  • Astroturfing, such as fake grassroots support or fake review generation
  • Disinformation
  • Spam

UCSD Policy on GPT

[T]here is currently no agreement with OpenAI, the developer of ChatGPT, or its licensees that would provide these types of protections with regard to ChatGPT or OpenAI’s programming interface. Consequently, the use of ChatGPT at this time could expose the individual user and UC to the potential loss and/or abuse of highly sensitive data and information. We are currently working with the UC Office of the President as this issue is under analysis right now. We hope to see this addressed in the near future.

ChatGPT Use

Do not use ChatGPT with protected information such as student information, health information, confidential financial information, personnel conduct data such as performance reviews, etc. In general, any data classified by UC as Protection Level 3 or 4 should not be used. A guide with many examples of P3 and P4 data is listed below in the References section. Please note that OpenAI states that how it uses or does not use your information is different when using their API services from the consumer ChatGPT interface.

Please be advised that in the absence of a UC agreement covering UC’s (including its staff, faculty and students) use of ChatGPT, your use of ChatGPT constitutes personal use, and obligates you, as an individual, to assume responsibility for compliance with the terms and conditions set forth in OpenAI’s own Terms of Use.

For further guidance on using ChatGPT please review the references listed below that address OpenAI’s own recommendations and policies on educational use, research and streaming of ChatGPT, and privacy, among others.

Prohibited Use

Please also note that OpenAI explicitly forbids the use of ChatGPT and their other products for certain categories of activity. This list of items can be found in their usage policy document.

Further guidance on ChatGPT will be forthcoming as soon as it is available.

Thank you,

Michael Corn
Chief Information Security Officer
UC San Diego

GenAI, Cheating, and Reporting to the AI Office (5.17)

Dear Colleagues,

As expected, the Academic Integrity Office is receiving an increasing number of integrity violation allegations that students are submitting AI-generated content as their own work. When students submit work for academic credit that they did not do, or did not do according to the assessment specifications, then instructors are right to report it to the AI Office as required by Senate Policy. However, since GenAI is a rather new tool, I thought it timely to provide some advice on documenting these types of violations.

While some instructors have hoped to depend on AI-detectors (e.g., GPTZero, Originality.ai, or Open.AI Classifier) to document integrity violations, these detectors cannot be trusted as their results are varied, their identifications unreliable, and they can be fooled. See here for a test of the detectors using the US Constitution: one detector identified the US Constitution as fully AI-generated, one said partially generated and a third said human-generated. According to OpenAI (the creators of ChatGPT), “the results [of their classifier] may help, but should not be the sole piece of evidence when deciding whether a document was generated with A.I.” (https://platform.openai.com/ai-text-classifier). GPTZero issued a similar disclaimer and warning.

Instructors should also not depend on asking ChatGPT itself if it generated a particular text. While ChatGPT might hallucinate and say “yes I did” or “no I didn’t”, it cannot actually detect AI-generated content. It was trained to finish sentences; it does not think, understand or analyze, and it doesn’t know the truth from fiction. As ChatGPT-4 itself says:

Even though the output from AI-detectors is insufficient evidence to support an integrity violation, we understand that instructors appreciate having some sort of verification/support for their suspicion of an integrity violation. So, if instructors decide to use AI-Detectors, we suggest that they take the following steps:

  1. update their academic integrity policy to explain to students when, why and how it is dishonest to use GenAI for completing assignments
  2. tell students up front and in writing (e.g., in syllabus) that their work may be submitted to AI-Detectors, including when this might occur (e.g., for all papers or just ones where cheating is expected), how the output will be used, and which detectors will be used
  3. use multiple detectors to compare the outputs
  4. use the outputs as the beginning, not the end, of the investigation.

4a. If, after seeing the output, the instructor still suspects an integrity violation, arrange a conversation with the student. Ask the student about their writing process, the choices made in the text, their use/choice of references, and probe their understanding of the content.

4b. If the student cannot explain their paper or their choices, or admits to using GenAI, document this and add it to the integrity violation allegation.

For instructors who are not banning the use of GenAI in assessment completion, see the AI Office’s official statement issued in January 2023 for alternative ways of responding to the impact of GenAI on assessments and learning. We regularly update this statement, so I encourage instructors to bookmark it and return to it often for guidance, resources, and FAQs. Also, I am available for consultation on particular or suspected integrity violations, but also on rethinking assessments and pedagogies in the era of Generative A.I. Instructors can contact me at aio@ucsd.edu.

If you have any questions, or would like me to come speak with your department, please let me know.

Sincerely,

Tricia Bertram Gallant, Ph.D.
Director, Academic Integrity Office

[UC San Diego]

Ai Detection — Mitchell Clark, 2023 (1.31 — The Verge)

“OpenAI also says in its tests the tool labeled AI-written text as “likely AI-written” 26% of the time and gave false AI detections [labeling human-written text as being from an AI] 9% of the time, outperforming its previous tool for sniffing out AI-generated text.

OpenAI isn’t the first to come up with a tool for detecting ChatGPT-generated text; almost immediately after the chatbot went viral, so, too, did sites like GPTZero, which was made by a student named Edward Tian to “detect AI plagiarism.”

One place OpenAI is really focusing on with this detection tech is education. Its press release says that “identifying AI-written text has been an important point of discussion among educators,” as different schools have reacted to ChatGPT by banning or embracing it. The company says it’s “engaging with educators in the US” to figure out what they see from ChatGPT in their classrooms and is soliciting feedback from anyone involved in education.”
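As a back-of-the-envelope check (my own illustration, not from The Verge piece), OpenAI’s reported rates, a 26% true-positive rate and a 9% false-positive rate, can be plugged into Bayes’ rule to see how weak a single “likely AI-written” flag really is:

```python
# Bayes-rule sketch (my illustration, not from the article), using
# OpenAI's reported rates: the classifier correctly flags AI text
# 26% of the time and mislabels human text as AI 9% of the time.

def posterior_ai(prior_ai, sensitivity=0.26, false_positive=0.09):
    """P(text is AI-written | detector says 'likely AI-written')."""
    # Total probability that any given text gets flagged
    flagged = sensitivity * prior_ai + false_positive * (1 - prior_ai)
    return sensitivity * prior_ai / flagged

# Even if 20% of submissions were AI-written, a flag implies only
# ~42% odds the text is actually machine-generated:
print(round(posterior_ai(0.20), 2))  # → 0.42
```

With numbers like these, a flag on its own is closer to a coin flip than to proof, which is consistent with the advice above to treat detector output as the beginning of an investigation rather than the end.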

GPT Subscription

The new subscription plan, ChatGPT Plus, will be available for $20/month, and subscribers will receive a number of benefits:

  • General access to ChatGPT, even during peak times
  • Faster response times
  • Priority access to new features and improvements

ChatGPT Plus is available to customers in the United States, and we will begin the process of inviting people from our waitlist over the coming weeks. We plan to expand access and support to additional countries and regions soon.

We love our free users and will continue to offer free access to ChatGPT. By offering this subscription pricing, we will be able to help support free access availability to as many people as possible.

Learning from the Research Preview

We launched ChatGPT as a research preview so we could learn more about the system’s strengths and weaknesses and gather user feedback to help us improve upon its limitations. Since then, millions of people have given us feedback, we’ve made several important updates and we’ve seen users find value across a range of professional use-cases, including drafting & editing content, brainstorming ideas, programming help, and learning new topics.

Our plans for the future

We plan to refine and expand this offering based on your feedback and needs. We’ll also soon be launching the (ChatGPT API waitlist), and we are actively exploring options for lower-cost plans, business plans, and data packs for more availability.

DALL·E 3

10.3 Update

The Other Ai — Malinda Danziger, 2023 (10.3 — UCSD Magazine)

(UC-San Diego interview with Dr. Tricia Gallant)


2. How do you think AI will change higher education?

It will change everything. AI will allow us to teach things differently. In the past, students attended universities to access all the knowledge of the world, from the best minds and the best libraries. You don’t need to go anywhere now; you can access that information at home through the internet. Our physical, in-person universities need to be the place where students can be with other people, learn from each other, practice skills and find a mentor. The value of a university is in its people.

3. How can AI support teaching at UC San Diego?

Studies show that active, engaged classrooms lead to better learning outcomes. It’s exciting for me to think about the possibility that AI can free up faculty and support staff from designing, printing, distributing and grading exams so they can spend more time mentoring and coaching students. We can use AI to help faculty cognitively offload a whole bunch of things so they have more bandwidth to design highly relevant learning activities that captivate and inspire students, even in large lecture halls. It would allow us to offer an individualized and meaningful educational experience. I think AI will be the impetus to finally force higher education to change — to become the active, engaged learning environment that it was always meant to be. That it has to be.

4. Can UC San Diego students use ChatGPT and other AI-assisted technologies?

It’s up to the faculty and the learning objectives for their individual courses as to whether ChatGPT or other generative AI can be used. And that makes it complicated. But I ask the students: Did the professor say you could? If they didn’t, you need to ask, especially if your use of the technology will undermine the learning objectives of the course. For instance, if you’re in a Japanese class and you write something in English and give it to ChatGPT to translate it for you, well, that’s cheating.

5. Should ChatGPT be integrated into coursework?

Yes, we should teach students how to properly use ChatGPT and other generative AI tools. They should acknowledge the use of the tool when submitting assignments. We should teach students critical AI literacy, including how it’s prompted and how they need to evaluate the information that comes from it. That will be a huge skill for our students, who will most likely utilize some sort of AI in their future workplace.


--

--

Dr. Jarryd Willis PhD

I'm passionate about making a tangible difference in the lives of others, & that's something I have the opportunity to do as a professor & researcher.