> At its heart, education is a project of guiding learners to exercise their own agency in the world. Through education, learners should be empowered to participate meaningfully in society, industry, and the planet.
I agree, but I have never seen an education system that had this as a goal. It's also not what society or employers actually want. What is desired are drones/automatons that take orders and never push back. Teaching people about agency is the opposite of that.
We are so stuck in a 19th-century factory mindset everywhere; GenAI is just making it even more obvious.
Employers want a high-agency leadership class and drones for the individual contributors.
There are systems that nurture agency and leadership. They are the private schools and the Ivy League universities. And many great companies.
Most people don't want to be leaders and be judged based on impact. They want to be judged based on effort. They followed all the rules when writing their essay and should get an A+ even though their essay is unconvincing. If they get a bad mark, their response is to create a petition instead of fixing the problems.
Maybe we should attack our culture of busywork and stop blaming educators for failing to nurture agency.
While you are likely correct about systems, I have known quite a few individual educators who have the goal of helping their fellow humans learn about their agency in the world.
I attended a public school system which, while it did falter at times in various ways, did a fairly good job of meeting its stated mission, which was more or less exactly that.
I witnessed far more personal political pressure and cajoling than corporate/future-employer pressure. Where I went to school, the pressure on schools usually came from parents, students, and local groups concerned with civic matters. I had (until recently) indirect (and sometimes direct) exposure to this because one of my parents was an educator and a senior member of their department in a district adjoining the one I attended.
Where I went to college, it was always very clear to me what was shaped by industry vs. research and academia. I went to a research university for an uncommon hard-science degree and so there was a lot of employer interest, but the university cleverly drew a paywall around that and businesses had to pay the university to conduct research (or agree to profit sharing, patent licensing, etc). There was a clear, bright line separating corporate/employer interest from the classroom.
One of my favorite essays on a similar topic: https://sites.math.washington.edu//~koblitz/mi.html
Neal Koblitz's "The Case Against Computers in Math Education".
Wow. Now there's a quote:
> "Youngsters who are immersed in this popular culture are accustomed to large doses of passive, visual entertainment. They tend to develop a short attention span, and expect immediate gratification. They are usually ill equipped to study mathematics, because they lack patience, self-discipline, the ability to concentrate for long periods, and reading comprehension and writing skills."
For context, the essay is from 1996. You could have told me this is from the current year and I would have believed you.
> You could have told me this is from the current year and I would have believed you.
Agreed. It's a matter of degree, and I wonder what reaching the eventual limit (if there is one) looks like.
There's a Platonic dialogue that has basically the same sentiment.
Makes sense. You also don't give calculators to students of arithmetic.
Sarcasm? We actually weren't allowed to take any kind of calculator into any of our advanced maths exams in University (and I'm talking just 15 years ago, not when they were newfangled things).
Can’t tell if you are serious but I will assume you are.
Why not? Seems like a logical conclusion.
1. Introduce the concept.
2. Demonstrate an intuitive algorithm.
3. Assist students as they practice and internalize the algorithm.
4. Reinforce this learning by encouraging them to teach each other.
5. Show them how to use tools by repeating this process with the tool as the concept.
You want to limit the use of AI in schools the same way you limit calculators: ensure the student can do the math without a calculator, even when the computation is hard, and then teach them to use the calculator as a tool to help them move faster.
Restricting AI completely or introducing it too early would both be harmful.
I'm not really convinced. This sounds reasonable but I can't formulate a good argument in favor.
(These days) it's hard to know what you mean by this and whether you're being sarcastic.
No, you don't give arithmetic students calculators for their exams, and you expect them to know how to do the math without one.
Yes, you probably give professionals who need to do arithmetic calculators so they can do it faster and with fewer errors.
Giving calculators to people who don't know how, why, and/or when to use them will still get you bad results.
Giving calculators to someone who doesn't have any use for one is at best a waste of money and at worst a huge waste of time if the recipient becomes addicted to calculator games.
The person you're responding to has clearly used the word "student". What on earth are you on about?
I interpreted "students of arithmetic" as anyone that practices arithmetic - similar to "students of medicine", etc.
Seems like a reasonable expansion of the concept to me. Why the aggressive dismissal?
woah, calculators aren't burning energy and water on big servers running technology that violates copyright and gives money to capital oriented companies
You can rely on the answer a calculator gives you. There's no danger that it will simply be confidently wrong.
A calculator uses some type of finite precision arithmetic internally. If you run afoul of the limits of this arithmetic system, it may very confidently give you a wrong answer!
Some calculators will confidently state incorrect answers to questions like:
(10^15 + 7.2 − 10^15) * 100
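For the curious, this failure mode is easy to reproduce in any system built on IEEE-754 doubles (the format behind Python floats and many calculator internals); a minimal sketch:

    # Near 1e15, adjacent IEEE-754 doubles are 0.125 apart, so 7.2 is
    # silently rounded to 7.25 before the subtraction ever happens.
    result = (1e15 + 7.2 - 1e15) * 100
    print(result)  # 725.0 -- confidently wrong; the exact answer is 720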
How about the original Pentium's FDIV floating point arithmetic?
LLMs are notoriously bad at math but they’re not LMMs so that shouldn’t be surprising.
If you want an LLM to do math you just ask it to write a program with tests.
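As a rough illustration of that workflow (this code is a hand-written stand-in for what a model might emit, not actual LLM output): instead of trusting in-context arithmetic, you ask for a checkable program, e.g. one that evaluates the expression above with exact rationals:

    from fractions import Fraction

    def evaluate():
        """Compute (10^15 + 7.2 - 10^15) * 100 exactly, with no float rounding."""
        return (Fraction(10) ** 15 + Fraction(72, 10) - Fraction(10) ** 15) * 100

    # The test is what makes the answer trustworthy; raw chat output is not.
    assert evaluate() == 720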
you people cling to that straw like it's a lifeboat
Water? I didn’t realize that when you use water it disappears from the universe, or even from earth, or even from the local ecosystem
Energy doesn't disappear, but obviously it moves from useful forms to useless ones. Same with water. Your sarcasm just comes off as naive arrogance.
The water is gone? Where did it go?
Converted to steam and carried out of the local ecosystem by wind. From the perspective of anyone downstream the water is gone.
Really? All water that goes into the air is gone forever? And then the wind blows it away? Incredible
Why are you being so antagonistic? Are you ok?
> Water? I didn’t realize that when you use water it disappears from the universe, or even from earth, or even from the local ecosystem
Your own standard includes the local ecosystem.
There aren’t any water riots. It’s called rain. The people shrieking about “wasted water” are unhinged
You should educate yourself about both the ecological impact of data centers and the economics of running a water treatment facility. It's just too simplistic to reduce the 'water' used to something that turns into rain and is then captured to be used again. Good luck with your next naive comment, something like: rain is almost distilled water and treatment isn't required, so it can go into data centers directly.
Also, don't forget the people living near these facilities, who constantly face water drawdowns due to the servers' high demand.
some read: https://thereader.mitpress.mit.edu/the-staggering-ecological...
https://aucgroup.net/water-treatment-plant-costs/
Same argument people use against cows and almonds. The water is used and recycled. This is the weakest possible environmental argument you can possibly make. I’ll wait for the citizens to riot about their “wasted” water
If we can freely recycle water, how little of it would we need on Earth before we would see an ecological change?
Can you explain why Mesopotamia was once an agricultural Mecca but is now an arid desert?
Or what human event caused the little ice age? https://en.m.wikipedia.org/wiki/Little_Ice_Age
Do you really believe all climate change in world history, which was dramatic and highly disruptive, was human caused?
How do you think that works exactly? That data centers cause more rain than would otherwise fall? How is that not an ecological change? Where does it come from?
The cat is out of the bag. Kids will use AI to write papers, learn topics, cheat on take-home tests, etc. Only a completely closed-off environment with no access to the internet could prevent this.
The best option is to change the incentives. 95% of kids treat school as a necessary hurdle to enter the gentry white-collar class. Either make the incentives personal enrichment instead of letter grades or continue to give students every incentive to use AI at every opportunity.
> GenAI is a threat to student learning and wellbeing.
This blanket dismissal is not going to age well, and reads like a profession lashing out.
With the right system prompt, AI can be a patient, understanding, encouraging, non-judgemental tutor that adapts and moves at the student's pace. Most students can not afford that type of human tutor, but an AI one could be free or very affordable.
"How AI Could Save (Not Destroy) Education" (https://www.youtube.com/watch?v=hJP5GqnTrNo) from Sal Khan of Khan Academy
> AI can be a patient, understanding, encouraging, non-judgemental tutor
Groan... no it can't. It can simulate all those things, but at the moment, "AI" can't be patient, understanding, or either judgemental or non-judgemental.
OK it can be encouraging. "You're one good student, $STUDENT_NAME!" (1).
1) https://www.youtube.com/watch?v=jRPPdm09xZ8
I can say the exact same thing about you or anybody else. You can't be a patient, understanding, encouraging, non-judgmental tutor. You can only simulate it.
I really can’t understand why people don’t understand this. What am I missing?
Philosophical zombies are supposed to be a thought experiment to demonstrate that solipsism and nihilism are stupid, not a rhetorical device to equate human minds to linear algebra statistical parrots.
Geezus freaking christ.
Now is that a simulation of someone who thinks he's responding to a cretin... or actually the feelings of someone who thinks he's talking to a cretin?
This simulation is too realistic. I’d like to stop the game now please.
If you are using the most commonly available AI and have an average ability to craft a search query, right now AI is not a particularly useful tool for learning anything. It is far too inaccurate to learn anything challenging. The key term here is "could": yes, it is possible, but there is nothing yet to say we shall get there.
My experience in higher education is that students use AI for one of two things:
1. To do the homework because they view classes and grades as a barrier to their future instead of preparation for such.
2. In place of a well-crafted query in an academic database.
Most students can not afford the expertise necessary to make an AI patient, etc.
I think the original phrase was made with the assumption "as it is right now".
I do share the concerns of the undersigned, even though I don't necessarily agree with all the statements in the letter.
It's not a blanket dismissal, it's a fact in context. It should read like a profession lashing out - that's what it is.
AI has enormous upsides and enormous downsides. The "you're going to look so dumb in the future" dismissal is lazy. Inevitability does not make something purely beneficial.
It's a fallacious line of thinking that's disappointingly common in tech-minded people (frequently seen in partnership with implications that Luddites were bad or stupid, quotes from historical criticisms of computers/calculators, and other immature usage of metaphor).
I'd respect the statement more if it acknowledged that AI had some benefit, or potential benefit in the future, but they did not want to use it currently.
Maybe if we move from LLMs to real AI it will have benefits.
"You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.” -- someone wise, or was he?
Previously: An open letter from educators who refuse the call to adopt [printed books, ballpoint pens, calculators, computers, the internet] in education
I'm not the biggest fan of AI for everything, but you couldn't create something more of a dagger to the heart of the current education system. If you are in the U.S., carefully watch for the D party to turn on AI in their messaging and you'll witness the strong influence that teachers unions have on them. Disagree with me all you want, but keep your eyes open; I guarantee you'll see it soon.
Interesting thought but my impression is that the democrats are much more beholden to other forces at play in the school system. I have friends who are teachers in the public school system, have been active in the union, and are indeed against AI in the classroom (although they're hardly rabid or unreasonable about it). On the other hand, the school administrators and IT departments are much more aggressive about pushing AI on them and pressing them to work it into the classroom somehow. Considering that the democrats are largely captured by corporate interest, and considering that tech/AI is one of the biggest corporate interests there is right now... I just don't see things playing out the way you predict.
The administrators and IT departments are not in the teachers' unions.
Yes... exactly my point.
Every teacher I talked to has said the influence of AI has been negative. Why wouldn't they fight to remove it from the classroom?
They are talking about cheating with it, not replacing teaching with it.
Should we teach our kids to outsource their thinking to those GenAI services where the big clouds control the gate? It would be less of an issue if local GenAI with comparable capability were more accessible to the general public.
Also,
> global community
As long as global means rich. 0 signatories from China, India, Russia, Pakistan, Bangladesh, Indonesia, Africa.
Do you think those have access to computers with AI for their education?
Yes.
> Current GenAI technologies represent unacceptable legal, ethical and environmental harms, including exploitative labour, piracy of countless creators' and artists' work, harmful biases, mass production of misinformation, and reversal of the global emissions reduction trajectory.
It's really annoying that political stuff always pollutes things. I largely agree with the position about GenAI being bad for education, but that position is not strengthened by tacking on a bunch of political drivel.
Whether you agree or disagree, I am happy to see a community putting out (in writing even) their problems with AI as it exists.
To the degree it is possible, I would like to think the AI community would try to address their issues.
I understand that some of the items in their open letter show a complete incompatibility with AI — period. But misinformation, harmful biases, and energy resource use should be things we all want to improve.
I don't think resource use is any business of teachers to be honest.
The problem with AI currently is that the students have figured out how to use it to cheat, but the teachers haven't figured out how to use it to teach.
AI is here; we need to figure out how to use it effectively and responsibly. Schools should be leading on this, instead of putting their heads in the sand and hoping it goes away.
AI is turning into a cult that's dividing us into those who support it and those who reject it. Arguments on both sides are flimsy, as no one really understands what it is. People see it as a black-box magic crystal.
This is absurd. I've learned so much from having an LLM tutor me as I go through a dense book, for example.
I find this all-or-nothing attitude extraordinary. Chatbots are the best personal tutors you'll ever find, and I tell students so. Do you need to understand mitosis for Bio 101? Ask your favorite chatbot. Then ask what daughter cells are - a question you might be too afraid to ask in class because maybe it was covered yesterday and you weren't listening. Then ask why there are no "son" cells - which you'd also be too afraid to ask about in class but want to know.
You can ask every dumb question. You can ask for clarification on every term you don't understand. You can go off on tangents. You can ask the same thing again ten minutes later because you forgot already.
No teacher or tutor or peer is going to answer you with the same patience and the same depth and the same lack of judgement.
Is it good enough for a grad student working on their thesis? Maybe not. Is it good enough for a high school student? Almost certainly. Does it give this high school student a way to _really_ understand biology, because they can keep asking questions until they start to understand the answers? I think absolutely.
There is no ethical generative AI, meaning fully permissioned datasets end-to-end. That is not yet scientifically possible. So, 100%, everyone who claims this is lying, usually by omission, and some BS startup isn't going to invent it.
In my open letter, I wouldn't say "ethical" or "environmental" or any of these intersectional things because you're giving space for lies.
People want ethical AI even if it's impossible. So we get aspirationally ethical AI. Meaning, people really want to use generative AI, it makes life so easy, and people also want it to be ethical, because they don't want to make others upset, so they will buy into a memetic story that it is "ethical." Even if that story isn't true.
Aspirational ethics has already gotten hundreds of millions of dollars in funding. Like look at generative AI in the media industry. Moonvalley - "FULLY LICENSED, COMMERCIAL SAFE" (https://www.moonvalley.com) - and yet, what content was their text encoder trained on? Not "fully licensed," no not at all. Does everything else they make work without a text encoder? No. So... But people really want to believe in this. And it's led by DeepMind people! Adobe has the same problem. Some efforts are extremely well meaning. But everyone claiming expressly licensed / permissioned datasets is telling a lie by omission.
It's not possible to have only permissioned data. Anthropic and OpenAI concede that there's no technology without scraping. Listen, they're telling the truth.
I loathe this entire line of "ethical" moral grandstanding.
AI should be trained on all data that is available. For a significant part of the dataset, it's the most useful that data has ever been.
> Further, GenAI adoption in industry is overwhelmingly aimed at automating and replacing human effort, often with the expectation that future “AGI” will render human intellectual and creative labor obsolete. This is a narrative we will not participate in
When every learner gets the high quality support and tutoring they need, all around the world, then we can talk about what you're unwilling to participate in. Until then, may every learner get a fantastic tutor via GenAI.