
IP 2 - AI

Published at 03:22 PM

IP2 - Artificial Intelligence

1. Who were these people, and how did/does each contribute to the development of artificial intelligence?

Each figure below is paired with Chat-GPT 4’s interpretation.

Alan Turing


Revolutionized computer science and cryptography, but was shunned for his sexuality. Turing would argue that intelligence is difficult to define. Instead, his proposed Turing Test holds that if a human cannot distinguish a computer from a person in conversation, the computer can be considered intelligent. (Biography, 2020)


Alan Turing -GPT

Turing was a mathematician and cryptanalyst. He laid the theoretical foundations for computer science and artificial intelligence (AI). His Turing Test was a significant contribution, proposing that a machine's ability to exhibit intelligent behavior equivalent to humans could be a measure of intelligence.

John McCarthy


He coined the term ‘Artificial Intelligence’ and pushed the field forward. McCarthy’s ‘Advice Taker’ proposed a hypothetical program in which logic is used to represent information. He presented common-sense reasoning as a defining trait of computer intelligence. (AIWS, n.d.)


John McCarthy -GPT

McCarthy, a computer scientist, coined the term "Artificial Intelligence" in 1955. He played a pivotal role in defining the field, organizing the famous Dartmouth Conference, and believed AI could achieve human-level intelligence through symbolic processing and logical reasoning.

Herb Simon


Nobel laureate in economics, pioneer across many fields, and co-creator of the first AI program. Simon viewed intelligence as the ability to problem-solve and make decisions through the processing and storing of information. He applied this thinking to his contributions in both computer science and psychology. (The Decision Lab, n.d.)


Herb Simon -GPT

Simon, a cognitive psychologist and computer scientist, was instrumental in developing theories of human problem-solving and decision-making, which he applied to AI. He believed intelligence could be analyzed as a set of symbol manipulation processes, which could be mimicked by computers.

Marvin Minsky


Built the first neural network simulator and was a major contributor to AI research. Minsky proposed that intelligence arises from many different cognitive ‘agents’ which, when functioning as a ‘society’, form the basis of intelligent systems. He applied this “Society of Mind” theory throughout his work in computer science and robotics. (“Society of Mind,” 2023)


Marvin Minsky -GPT

Minsky, a cognitive and computer scientist, co-founded the MIT AI Lab. He worked on developing theories and models to replicate human cognitive processes in machines, advocating for a multi-disciplinary approach to understanding and building intelligent machines.

Timnit Gebru


Perhaps the most important contemporary AI ethicist and researcher. Gebru’s cautionary research into the consequences and negative externalities of AI has exposed the decelerationist vs. accelerationist battle in the field. Gebru views intelligence partly as the ability to derive coherence from contextual and communicated information, an ability her research argues AI does not possess but is masterful at imitating. (Bender et al., 2021)


Timnit Gebru -GPT

Gebru, a contemporary computer scientist, focuses on AI ethics, particularly in algorithmic fairness and data biases. While not directly addressing the definition of intelligence, her work emphasizes the social impact of AI and the need for responsible AI development and deployment.

2. How do “machine (programming) languages” differ from human (natural) ones?

Duncan

Human language is broad communication, while programming languages are specific instructions. They are a firmly defined set of grammar and syntax rules that computers have been designed to compile down into the most basic binary logic and execute. (Petterson, 2021) Unlike human language, a programming language cannot be interpreted based on contextual information, nor does it involve any sort of descriptivism. (Ana, 2018) It is a set of instructions that is either interpretable by the machine or not; if it is interpretable, it will be executed regardless of its veracity. Human language is limitlessly flexible and ever-evolving, in contrast to programming languages, which are methodically designed and iteratively revised.
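As a purely illustrative sketch (my own toy example, not drawn from the cited sources): a Python interpreter will happily execute any statement whose syntax it can parse, whether or not the statement says anything true, while a single syntax error stops the program before it runs at all.

```python
# Syntactically valid code executes regardless of whether its content is "true":
# the interpreter checks form, not meaning.
answer = 2 + 2
print("2 + 2 =", answer)                # runs and prints 4
print("The moon is made of cheese.")    # also runs; veracity is irrelevant

# A single syntax slip, however, makes the whole program uninterpretable.
# Uncommenting the next line prevents anything above from running:
# print("missing a closing quote)
```

There is no room here for the contextual inference or descriptivist drift that natural languages tolerate; the parser either accepts the form or rejects it.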

Chat-GPT

Machine (programming) languages differ significantly from human (natural) languages in their structure, purpose, and complexity. Programming languages are designed for computers, providing precise, unambiguous instructions for performing specific tasks. They follow strict syntax rules and are limited in scope compared to the fluidity and versatility of natural languages. Human languages are inherently more complex, nuanced, and adaptable, used for a wide range of expressive purposes, including conveying emotions, abstract concepts, and cultural nuances. Natural languages evolve organically over time, influenced by social and cultural factors, while programming languages are systematically developed and updated by humans.

3. How does “machine (artificial) intelligence” differ from the human version?

Duncan

Unlike human intelligence, artificial intelligence is limited by the data provided to it by humans. For specialized, well-defined tasks, such as Go or StarCraft II, the data available is effectively limitless, and AI systems such as DeepMind’s are able to beat even the best humans. This conforms with Minsky’s “Society of Mind” theory. Meanwhile, an autonomous vehicle, when presented with even the most benign of obstacles, will often completely freeze up (meme me, 2023), a scenario human intelligence could easily adapt to. Chollet (2019) classifies that kind of adaptability as generalized intelligence: the ability to respond and adapt in a wide variety of scenarios.

Chat-GPT

Machine (artificial) intelligence differs from human intelligence in fundamental ways. Artificial intelligence is designed to mimic certain aspects of human cognitive abilities, such as learning, problem-solving, and pattern recognition, but it operates within a limited, predefined scope set by its programming. It lacks the breadth and depth of human intelligence, which encompasses emotional understanding, moral reasoning, consciousness, and creative thinking. Human intelligence is holistic, shaped by biological, psychological, and social factors, and evolves through complex interactions with the environment. In contrast, machine intelligence excels in processing vast amounts of data and performing specific tasks with speed and accuracy unmatched by humans.

4. How does “machine learning” differ from human learning?

Duncan

Human learning is a lengthy, energy-efficient process that can take significant effort, but it allows us to operate responsively and dynamically. By comparison, machine learning is incredibly fast, able to incorporate and digest immense amounts of data in a short period of time. Relative to human learning, however, this process is extremely energy inefficient and results in major negative externalities. (Bender et al., 2021) Both human and machine learning are subject to biases, but the broad swaths of data fed to machine learning models are filled with all of humanity’s worst biases, no matter how much effort is put into their curation. (Heilweil, 2020)
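To make that mechanism concrete, here is a minimal toy sketch (my own illustration, not an example from Bender et al. or Heilweil): a “model” that is nothing more than frequency counts over a skewed corpus will faithfully reproduce that skew, no matter how neutral the counting rule itself is.

```python
from collections import Counter

# A tiny, deliberately skewed "training corpus": the bias lives in the data,
# not in the learning rule.
corpus = [
    "the doctor said he would call",
    "the doctor said he was busy",
    "the nurse said she would call",
    "the engineer said he fixed it",
    "the teacher said she would help",
]

def pronoun_after_said(profession: str) -> Counter:
    """Count which pronoun follows 'said' in sentences mentioning a profession."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if profession in words:
            counts[words[words.index("said") + 1]] += 1
    return counts

# The counts (standing in for a learned association) mirror the corpus exactly.
print(pronoun_after_said("doctor"))  # Counter({'he': 2})
print(pronoun_after_said("nurse"))   # Counter({'she': 1})
```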

Chat-GPT

Machine learning, a subset of artificial intelligence, differs from human learning in several key aspects. It involves algorithms learning from data, identifying patterns, and making decisions with minimal human intervention. Machine learning focuses on specific tasks, improving performance as it processes more data. In contrast, human learning is a complex, holistic process involving cognitive, emotional, and social dimensions. Humans learn from diverse experiences, not just data, and can understand context, abstract concepts, and apply learning flexibly across different scenarios. While machine learning excels in handling large datasets and specific pattern recognition, human learning is adaptable, creative, and capable of understanding nuanced meanings.

5. Turing Test?

As I read it, the machine text seems so obviously artificially generated. My responses interweave references to the assignment readings, integrate real examples, and generally struggle to fit everything I want to say within the word-count limit. Meanwhile, the machine spits out polished, sterile bullet points.

Our question 2 answers cover similar points, with my writing more meandering and the machine’s laden with adjectives. Question 3 is very distinct, with my answer focusing on the agent vs. generalist theory of intelligence. The machine’s answer is much more thorough, describing the emotional, moral, and creative aspects of human intelligence. My question 4 touches on the negative externalities of machine learning, namely the biases and environmental impact, topics the machine does not touch on.

Each question in the assignment is phrased with “machine” before “human”. The machine’s answers all stick strictly to that order, while all of my answers begin with a reference to the “human” side. Perhaps my bias is showing?

Ultimately, the machine’s answers are correct, thorough, and concise. But they feel almost clinical, as though the machine needs to fit as many relevant bullet points into each sentence as possible. Are my answers missing some of these bullet points? Absolutely, but I’m hopeful that the average reader would be less likely to have their eyes glaze over reading mine.

References

Alan Turing. (2020, July 22). Biography. https://www.biography.com/scientists/alan-turing
Ana. (2018, November 1). Human languages vs. Programming languages. Medium. https://medium.com/@anaharris/human-languages-vs-programming-languages-c89410f13252
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Chollet, F. (2019). On the Measure of Intelligence (arXiv:1911.01547). arXiv. http://arxiv.org/abs/1911.01547
Heilweil, R. (2020, February 18). Why algorithms can be racist and sexist. Vox. https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency
Herbert Simon. (n.d.). The Decision Lab. Retrieved January 29, 2024, from https://thedecisionlab.com/thinkers/computer-science/herbert-simon
Petterson, E. (2021, December 21). Are programming languages actually languages? Lucidchart. https://www.lucidchart.com/techblog/2021/12/21/are-programming-languages-actually-languages/
meme me (Director). (2023, January 20). Fully autonomous car glitches in a construction zone and gets stuck 🚙. https://www.youtube.com/watch?v=i96nws0MtDg
Museum of Science (Director). (2022, December 8). The Biggest Differences Between AI and Human Intelligence Explained by MIT Student. https://www.youtube.com/watch?v=oRwlBtEty-Y
Roose, K. (2023, December 10). This A.I. Subculture’s Motto: Go, Go, Go. The New York Times. https://www.nytimes.com/2023/12/10/technology/ai-acceleration.html
Society of Mind. (2023). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Society_of_Mind&oldid=1157356670
This week in The History of AI at AIWS.net – John McCarthy proposed the ‘advice taker’ in 1959. (n.d.). AIWS.net. Retrieved January 29, 2024, from https://aiws.net/the-history-of-ai/this-week-in-the-history-of-ai-at-aiws-net-john-mccarthy-proposed-the-advice-taker-in-1959-2/
Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/