Episode Transcript
[00:00:02] Speaker A: Seon180. Coming up in 5, 4, 3, 2, 1.
Be bold.
Take risks.
Lead by example.
Believe in your power.
Say what you feel. Mean what you say.
Hi. Join me at seon180 on this journey of discovery and advancement. Hello again and welcome back to Seon180. I'm your host, Leslie Ann Seon. This podcast is all about igniting conversations and empowering lives with real talk, real people and real change. With over 20 episodes across four seasons, we feature inspiring voices from the Caribbean and around the world, diving into a vast array of diverse, dynamic and engaging topics. Topics that truly matter to our Caribbean community.
I invite you to check out my website at seon180.com or your favorite podcast platform to catch our latest episodes, and do follow us on Facebook and Instagram for updates, advice and engaging discussions. Today, I'm delighted that we'll be chatting with Dr. Jessica Parker and Dr. Kimberly Becker, founders of the academic AI platform Moxie. Our conversation will center on AI and its impact on higher education, with a focus on ethics (one of my favorite topics), learning dynamics, and the implications for educational institutions globally and regionally. And so, it is with great pleasure I introduce Dr. Jessica Parker, who is a forward-thinking educator, innovative researcher and accomplished entrepreneur, actively exploring the integration of artificial intelligence in education. She's a university lecturer and a successful educational venture entrepreneur who brings a blend of practical experience and academic insight to the evolving landscape of AI in academia.
Dr. Parker's research focuses on the potential of technology in education with a growing interest in generative AI. Her work investigates how AI can enhance research methodologies, inform teaching practices, and create more engaging learning experiences. And we also have Dr. Kimberly Becker. Dr. Becker is an applied linguist who specializes in disciplinary academic writing and English for research publication purposes. She has a PhD in Applied Linguistics and Technology and an MA in Teaching English as a Second Language. Her research and teaching experience as a professor and communication consultant, drawn from having taught at the high school, community college and university levels, are invaluable to us on this Seon180 episode. Her most recent publications are related to the use of ethical AI for academic research and writing, and she's the co-author of an eBook titled Preparing to Publish, about composing academic research manuscripts. Ladies, scholars, it is absolutely delightful to have you with me today, chatting with us on this very important emerging topic: AI in education, specifically higher education. Thank you.
And of course I want to begin by asking you what is Moxie? Tell us how that started and what is the concept behind it.
[00:04:03] Speaker B: Moxie is a… well, first, Leslie Ann, thank you for having us. We're honored to be here to talk to you about this topic today.
As you know, AI is a really hot topic right now in the news media. There's a lot of news cycles about it, and something that Kimberly and I are both passionate about is helping academics navigate this new landscape and also balancing expectations with reality, because we feel that what's being reported in the news is not very well aligned with reality in terms of how we might use AI as academics. So, I'd like to just start there. In terms of Moxie, it's been an interesting journey. You know, Kimberly and I, we're not computer scientists. I would have called myself a non-technical person several years ago. And so, this is not somewhere we ever expected to be. Moxie evolved from a different business we were running together, which was an online, virtual academic writing center for graduate students. We were trying to solve the problem of how to give graduate students support when they need it, outside of the typical power dynamic that happens with their supervisors and their committees. And we were running into issues that, honestly, brick-and-mortar academic writing centers run into. And so, we were kind of in this struggle of how to make that business model work. And then ChatGPT came out. We started using ChatGPT early last year, in 2023. We immediately started diving into research and exploring its capabilities and limitations for automated writing evaluation, because we thought: is this a technology that could give more of our clients access to the writing feedback that they need? Does it work well for those purposes? What are the limitations of it? And that really started our journey with Moxie. It wasn't long after that, I think it was last summer, that we started building our AI tools, using an application layer between the user and the large language model. And then the market responded; people liked our tools. And so, then we founded Moxie last November. And we consider ourselves a hub, so we're not just a tech company.
We're really passionate about education and AI literacy and research. So, we consider ourselves more of a hub, where technology is just one component of what we do, where we really feature the human at the center of our work and consider the AI technology a tool to accomplish a goal, instead of being tech-first.
[00:06:47] Speaker A: Right. And Kimberly, what's the market that's most attracted to your platform?
[00:06:56] Speaker C: Well, so I think graduate students were who we had in mind, but what we're finding is that sometimes graduate students may not know enough about what they need in order to use our tools in fully effective ways. And so, faculty have been very attracted to us, because we provide AI literacy training. We keep up with the research; we know what the newest best practices are around ethical AI deployment in higher ed. We're talking about critical literacy and bias. We're concerned about the way that AI can amplify that. And so I think ultimately, right now, I would probably say faculty are hearing our message more. It's not that graduate students don't hear the message, but I think sometimes they may just not know exactly what they need to do. And so, they really need a little human guidance in there. But the faculty, they get it.
[00:08:01] Speaker A: It's good to know that there is an understanding of its utility. So, is this more geared towards graduate students, or can undergraduate students or even students at the high school level get access to that type of platform?
[00:08:18] Speaker B: That's a good question. Our target audience is very niche. We really primarily work with master's students, mostly doctoral students, post-docs, and faculty researchers. All of Moxie's generative AI tools are geared to support researchers at various stages of the research process. If you think about the scientific method: conceptualizing an idea, developing a hypothesis, searching the literature, synthesizing, analyzing data. All those stages of research are what we focus on in Moxie. We do not currently have any products to support anyone outside of researchers, whether that's someone who's learning research or an experienced, active researcher.
[00:09:04] Speaker A: Right. And so, you are really using AI as an integrative tool in this higher-level education process. Can you tell us a little bit about the tools, or the research tools, that you offer, so that graduates can understand a little bit more about how it's useful to them?
[00:09:26] Speaker B: Yes. So, we organize our tools according to the IMRaD framework. Researchers who are commonly reviewing research articles recognize the IMRaD framework as the structure of a research article: you have the introduction, the methods, the results, and the discussion and conclusion. And we organize our tools in that way because typically, as a researcher, you start with the introduction and the lit review. Those are sort of combined together, but we do break those into separate categories, because the lit review is sort of a behemoth of a task to undertake.
So that's how we organize our tools, according to the major sections of a research article. We also give our users access to all of the frontier AI models, so that they don't have to log out of Moxie and then go access, maybe, GPT-4 or Claude or Gemini separately.
[00:10:21] Speaker C: And so, people come to the table expecting the technology to act in a certain way, and they don't realize it's really a negotiation. It's not "you give me this and I output this exact expected thing." It's "you ask me for something and then we iteratively negotiate that until you get what you want from me" - and if you've used AI, you know this. It's almost codependent. It seeks to please you. Now, it will set boundaries, but it's very eager to please you and give you what you want. And so, you can use that to your advantage. I mean, within ethical boundaries, of course, but you can use that to your advantage to get what you want. But people often come to the table treating it, you know, like it's software, like it's rule-based. And it's not rule-based. It's predictive.
[00:11:16] Speaker A: It's predictive... And then a lot of what you get out of it, I guess, is from that exchange and regenerating and sort of rearticulating what you want to get out of it. Correct?
[00:11:30] Speaker C: Right.
[00:11:30] Speaker A: Yeah. Okay.
[00:11:32] Speaker C: Yeah. So, people who are good communicators in general, with other humans, are great AI practitioners. You know, we think of it as a technology, so maybe people in STEM and computer science would be good at it, but actually, it's the humanists who are quite good at using AI, because their communication skills are at the top of their game. That's really what it takes.
[00:11:56] Speaker A: I got you. Because a lot of times, you know, students may think, oh, this is a nice way to circumvent long hours and hard work. Right. Let's just go to Moxie. And Moxie will fix it instantly, you know, like a Band-Aid on a cut kind of thing. So, I'd like you to tell us why it doesn't work that way, and obviously there are advantages to using Moxie for research and academic purposes. Tell us some of the key things that they can gain from that.
[00:12:28] Speaker B: You know, it's interesting. I'm glad that you asked this question, Leslie Ann, because this is one of the misconceptions that we're trying to educate people about. AI is being sold to us, and the language being used by these large corporations is all around speed and efficiency and productivity. I remember when Microsoft first launched their AI educational product within Copilot; in the demo, they used the terms "teaching speed" and "learning speed." And I found that so interesting. And I think it's setting students up for failure, because learning is supposed to have friction. It's supposed to be challenging. And then when you apply that to research: research is really hard. It's complex. I mean, there's a reason why only about 2% of the global population has a doctoral degree. It's not supposed to be easy. It takes a lot of time to develop your ideas and test those ideas and develop your voice as a scholar and become an expert in your field. Like, that's the whole point.
[00:13:34] Speaker A: That's the point.
[00:13:35] Speaker B: That's the point. And when this language is being used to sell us these products, it's very deceiving to the user. Moxie is definitely not a magic, you know, potion that the user just takes and then suddenly their research is complete. And that's what we're up against. A lot of the tools that we're seeing in the market, being marketed to our same users, are all about: we'll complete your sentence, we'll finish your paragraph, just type in a sentence and we'll write the paper for you. And we take a very different stance. We add guardrails to Moxie to discourage that. We can't prevent it completely because, again, it's not traditional software, but we highly discourage the AI from writing for the user or revising for the user. Instead, we program our models to act as thought partners and collaborators, much like having a human expert alongside you who's challenging your assumptions, challenging your ideas. And that inherently involves friction. And so sometimes people don't like that, when they're expecting something that's going to just do it for them.
[00:14:47] Speaker A: Right.
[00:14:48] Speaker B: I heard this really interesting description of the Internet that I think can be applied to some of our expectations around AI. In a book I'm reading right now, the Internet was described as being frictionless and decontextualized. And so, as a society, we're used to convenience and speed and getting things right away and not having any friction, and now we have AI that's being sold to us that way. And I think that is the biggest challenge right now for education, regardless of whether it's higher education or K through 12: how do we balance that with this technology and resist the urge to create a friction-free tool? Because it sends the wrong message about learning.
[00:15:37] Speaker A: It does, you know, and it's something that intrigued me quite a bit when I was reading about you and your platform, Moxie. And Kimberly, for me, one of the questions that I had at the forefront of my mind is that we're faced with a generation who wants everything instantly. It's sort of like everything has to be now, for now. And if there's any way we can circumvent the long route and hard work and tireless hours, let's just go for it. And so that sort of broaches the area of the ethics of AI in education. How do we use it wisely, responsibly, ethically, so to speak? And how do we convince educators and educational institutions that it's okay to use a platform like Moxie? So, I'd like to hear some of the ethical considerations that you'd like to share with me in terms of how we navigate that in classrooms, research and graduate studentship. Can you tell us something about that?
[00:16:42] Speaker B: Yeah.
[00:16:43] Speaker C: So usually the place that we start, when we are talking to faculty, is acknowledging that this is a fear-laden time. This is scary. We're hearing reports that, you know, this technology is really going to change the workforce, the professional workforce, in huge ways. And eventually I believe it will, but we're not there yet. Everything is so emergent. I think that's the thing. People are seeking rules. They want to know what's right, what's wrong. We'll have a webinar and graduate students will say: but is it okay? Is it right? They want to know. I really do believe most people want to do the right thing. It's just that when they get pressed and they get, you know, stressed out at the last minute, they make suboptimal decisions. But one of the things we do, after we acknowledge the fear, is talk about the spectrum of use. We say the choice is not a binary choice, you use it or you don't use it. There are a lot of different ways you can use it, and the degrees to which you can use it are different for certain use cases. So, we have a spectrum diagram that we use, where we talk about human-only writing versus fully synthetic, fully AI-generated writing. And then along the line in between, there is human-in-the-loop writing or machine-in-the-loop writing, which are just terms to indicate to what degree the human is or is not involved. And so certain use cases, for certain assignments, may lean more in the direction of human-only writing. Something like a personal narrative, a reflection on an event that happened to you: obviously, unless you have some sort of disability, there's no reason for the technology to be involved. Right? On the other hand, another extreme example would be if you're writing about, let's say, the results of a statistical test, which are always written about in the same way, using kind of formulaic language.
The words are always the same, and the reader is expecting those exact words. That's a disciplinary technique in many fields and in science writing. And then, in between those extremes, there are all kinds of different assignments where you maybe start with AI and then revise as the human, or you start with the human, throw it to the AI, and negotiate that meaning back and forth. So, the spectrum of use is really, really helpful for people to envision a world where we can ease our way in and we don't talk about it in black and white.
[00:19:23] Speaker A: Yeah, I got you on that. So, one of the things that fascinates me with this, as we're getting into using it more and more, is: what is my professor thinking when I am presenting a paper or a thesis in which I have used AI as my collaborator? Am I going to be discriminated against? Am I going to be applauded? Is it a mix? What's the trend? Do we educate the educators that this is okay, and what are the principles by which they should be guided? So that I myself, if I'm a graduate student, don't feel prejudiced or harmed by the fact that I'm using this integrated tool in my research paper.
What do we do with educators?
[00:20:16] Speaker B: What I think is challenging right now is that educators are looking for sort of blanket policies from their institutions to help them understand what to do, because the reality is we're still early adopters. And many educators I talk to today, a lot of them still haven't really used AI, but they are expecting their university to provide some guidance. I mean, my university sent out an email that wanted to put together a task force and create a university-wide policy. And the challenge with that is that it doesn't work, because it's very contextualized. It depends on the learning outcomes. For example, I teach an academic writing course. I need to know that the students are able to understand those writing concepts and implement them in their writing. So, the way I use AI in that course is, initially, I don't allow them to use it. I need to see a benchmark of their baseline writing skills without AI. The way I use it in that course is going to be very different than, say, someone teaching a finance course, where maybe they're fully allowed to use AI, or not at all. It depends on the level of the learner. Is this a first-year, first-semester student, versus a senior who has already learned a lot of the concepts they need to know for their field of study? The blanket policies don't work. And the downside of that is it leaves it up to the faculty member to decide, and many faculty are not prepared. AI literacy is low. So, what we're seeing is the push is coming from the students, the demand is coming from the student side, and it's putting faculty in a tough position. And in response to that, what we say is: talk to your students. Have a culture in your classroom of transparency.
Don't ban it, maybe don't have a strict policy, leave it a little open-ended and talk to your students in the beginning of the semester and have that transparent environment so that they feel comfortable coming to you and asking questions about when it might be appropriate to use it. And only then might you start to understand how they might be using it, because it might not be clear to you if you're not using it yourself.
With that said, Kimberly and I have worked on this AI literacies framework that is very general. It's not specific to a discipline or a learning outcome. It's more of an AI literacy perspective that can help educators think about the gradual integration of AI in a course, if it seems appropriate. And it starts with functional AI literacy, which requires the educator first to use it and understand the capabilities and limitations before they can help their students understand. Then there's critical AI literacy, which is layered, but at the most basic level it's learning how to be critical of the outputs, which also includes aspects of bias. And then there's rhetorical AI literacy, which is co-creating with AI and understanding how to create a product with it, after those functional and critical skills have been developed. And we recently published an article that I co-wrote with my students that shows how AI can be integrated according to Bloom's taxonomy. But again, that was integrated with doctoral-level students in a writing course. And that's where it becomes challenging: it requires a lot of thoughtfulness about what knowledge the students are coming to the table with, what the learning outcomes for the course are, and what skills and competencies are needed. And the way AI is used is going to vary significantly based on the answers to those questions.
[00:24:05] Speaker A: Yeah, your response triggers like a hundred questions in my mind all at the same time, because I'm actually seeing Moxie, and yourself and Kimberly, almost in the role of advocates. Because if you're going to expand the use of the platform and get people to understand how it can be utilized well and effectively, it almost means getting it up to that very hierarchical level of universities and colleges and so on, to say: these are the benefits, but they will operate within these certain constraints or guidelines. So, it's not just students hammering, because pretty soon you will have a barrage of students wanting to, you know, push forward on this. So, do you think that's a role you might have to take on, in order to establish this as an everyday tool in higher education, like a pen or a computer?
[00:25:05] Speaker C: Yes. And that's been a big part of our last few months: doing webinars. People ask us to go over our literacy framework so that they can start to understand what this looks like. And I encourage people to play with it. I mean, yes, it is a very serious technology, and yes, it has very serious consequences in society, but at the outset it can be quite fun to engage with, because it is so unique and different. And you're not going to break it, so you don't have to worry. You know, if you're learning Microsoft Excel, you also can't break that, but you definitely come up against a right answer and a wrong answer; you either are or are not getting what you want out of it. But with AI, you can kind of play around with it and see what works for you. And people always want to know: where do I start? And usually what we say is, well, what is a problem that you haven't been able to solve, that you want a solution for? The simplest thing is just to brainstorm with an AI about how you could solve that problem. Start there. Just talk to it like you would an advisor. And be critical, though; know that it's probably going to say what you want to hear, and you can push back against it. But getting people to just at least start thinking about being open-minded, I think that's the big challenge, because we can't change the way people think. And as Jessica said, if students are coming to faculty, then the faculty are put on the defensive.
And so, there's already a power dynamic between a student and a professor, and now you've got students coming to the table with this. I mean, honestly, if they know more about the AI, they are bringing more power, or at least more knowledge, to the table in that situation. And so the faculty member, if they are equipped with some functional literacy, maybe they can say: well, tell me how you would use it. What would it look like for you to use ChatGPT, for example, to do homework in this class? What would that look like? And maybe that can open the dialogue in such a way that students don't quite feel so… they're very worried. There's a lot of fear with faculty and there's a lot of fear with students because of AI detection.
And so, yes, students are quite concerned, because it's a serious accusation, academic dishonesty. And so, yeah, there are big consequences. But I do think we can kind of start at a lower level of just playing with it, just seeing what you can do.
[00:27:58] Speaker A: Right. So, I like the fact that it's, you know, interactive, it's engaging. That should be good for students. You're pushing yourself because it's responding to you, and hopefully you're getting better and better, and ultimately that's what you want. What is of interest to me now is how we utilize this and how Moxie assists us in the Caribbean, operating within our constraints: limited resources, limited access to resources, balancing work and study, which really is a global issue anyway. But we want to be competitive globally. A lot of Caribbean students want to apply to universities in North America, in the UK and in Europe, so they're always searching for ways to stand out, or to at least be at that international level. Can Moxie help our graduate students, for example, in the Caribbean? And in what ways do you see that being done?
[00:29:08] Speaker B: The first thought I have, outside of Moxie in particular, is just related to education. And I like to remind people, especially when it comes to higher education specifically: higher education inherently has value. The degree you have has value because it's valued by the workforce. And if the workforce no longer values the degree, the skills of the students that are coming out of the higher education institutions, then it inherently devalues those institutions and what they're offering. And it's always been a challenge for higher ed to keep up with what's happening in the workforce. This technology is very different because of how rapidly it is evolving.
Case in point: there are certain features that we looked at developing within Moxie, and over three months, the price went down by half. I mean, three months. And that's what we're seeing. And so, it becomes really hard to make decisions, just as a tech company.
So, then I think about what's happening in the workforce: that rapid adoption, but also the lack of literacy, and what's being marketed in the workforce. And then how is higher ed going to keep up with that, to make sure that we are preparing our students for an AI-integrated world? So that is a huge challenge that we're grappling with. I mean, I still teach; I work with healthcare students. These are students who already have a professional degree. So, they're pharmacists, physical therapists, nurse practitioners. They have a professional degree and they're getting a terminal degree for maybe leadership reasons within their organization. We're seeing a lot of transformation already happening in healthcare, and we can't keep up. My curriculum doesn't currently reflect the changes that are happening there. So, it's like they're telling me what they're seeing in their organization.
That's not new in higher ed, but the pace at which it's happening is unprecedented. In any country, I would be thinking about how our institutions are preparing our students for the workforce they are trying to enter. And so, if you're in the Caribbean and you have a higher ed institution there, and you know that your students are looking for jobs in North America: what's happening in the workforce in North America? What are those skills that are valued, and how do we make sure that we're preparing our students by adapting and transforming our curriculum? And that cannot happen if faculty and administrators are not in the know about how to use it and leverage it. So, partnerships between industry and institutions, I think, are imperative. If those partnerships do not exist, they need to be formed, to ensure that you have someone on the inside who's helping leadership make decisions about what needs to happen within their curriculum.
The added complexity to that would be professional standards, let's say within healthcare or law. There are institutions that determine what those standards are, and there also needs to be collaboration there, because you have accreditation and all these institutional bodies, and then we have industry. They really need to work in partnership to make sure that we're preparing our students for that AI-integrated workforce.
That's what immediately comes to mind for me, whether you're a leader in higher ed in the Caribbean or elsewhere.
The idea of resources is incredibly challenging. The amount of energy and physical resources that is needed to build these large language models is astounding. We're already running into issues in North America. I mean, I was just reading about how OpenAI has a $5 billion loss this year, despite all of the investment happening from Microsoft and other companies. They're trying to negotiate building new data centers, and I saw reports somewhere that they were trying to build this huge data center, because all this compute requires physical space for the servers, and that it was going to take seven years to build that infrastructure. So, we're already starting to see significant limitations in the resources needed to develop the technology. And what that means for developing countries is hard to wrap my head around. I think about the digital divide: is that just going to get that much worse? Because most of the companies that are developing this technology are in North America. And I think that's really difficult. I definitely do not have the answers to that… doing consulting, you know, but it's really hard. I do not envy any government leaders or policymakers who are trying to navigate this.
[00:35:44] Speaker A: Yeah, it's a very treacherous terrain. I think one of my fears within the Caribbean Community context is that sometimes we're guilty of, you know, rigid beliefs and practices through precedent. We're accustomed to doing things a certain way, the old-fashioned way: you do your books and it's all human input, and there's no integration of technology except, well, we use a computer now to, you know, type up our papers and email them to our professor and so on. So, I see walls of resistance. Walls of resistance from the universities, the institutions, and the hierarchy in terms of the use of this by students. And I also see students having an innate fear: am I prudent in pursuing this course of study, or in pursuing this master's, because AI can eventually take over my job or aspects of my job? So, what kind of encouragement or advice or defense do you have for those on either side of this resistance and fear?
What can we do to give them assurance that it's okay?
[00:36:58] Speaker C: My favorite anecdote about this comes from a cognitive scientist named Alison Gopnik. She's at Berkeley, and she's published a lot of research on AI and the cognitive connections between the brain and the technology. But recently she took her research and turned it into a very approachable and relatable blog post using the folk tale of Stone Soup. I don't know if you're familiar with this one. I don't know what the…
[00:37:38] Speaker A: That one is...No, not that one.
[00:37:39] Speaker C: Okay, so Stone Soup, the story is there's a village, and there's a famine in the village. And all of a sudden these travelers come through, and on the outskirts of town they have a big pot. And the villagers get curious. So, they go out and they say, what's in your pot? And the travelers say, oh, this is stone soup. And they say, what do you mean? Well, it's just water, and then we put some stones in there to give it a little flavor. And the villagers say, oh, that's interesting. What might make that soup better is maybe some carrots. I have a couple of carrots in my house. And so, they bring a carrot. And then the next set of villagers gets interested and says, oh, yeah, maybe I have a couple of potatoes… And then the next one is like, I have some chicken broth, you know. But the village comes together, and they create this soup that can then feed lots of people.
So, Alison Gopnik says it's like this: we all have to bring something to it in order to use it, to get something out of it. And it's being called Gopnikism because it's such a great metaphor for the social technology that it is. At this point, it really doesn't have much use unless a human is driving and giving it direction. I think that may change soon once agency gets involved, when the AI can go out and do those tasks on its own. But as it is right now, the important thing is to think: what can we bring to it to make it better in our context? And how can we ask other people? I think asking other people is really important, because then you develop a little compassion. Oh, that person's using it for this, instead of judging them for that, for example. You start to get curious about it. Well, maybe it's possible that they're using it ethically. Maybe that's within the realm of something I can believe. And slowly, as a community, people start to gain the literacy together. That's, I think, the power of it. And lots of people are talking about Gopnikism now as a way to think about it.
Actually, I saw the movie The Wild Robot the other day, which is a new movie out in the U.S. It was in line with this idea that technology can't really exist without us, and in some cases we have become so dependent on technology that we can't live without it. So, it's becoming this socially symbiotic thing that is developing among us. I don't know if that's helpful or too farfetched, but I really liked it. It helped me have a little bit of hope, because sometimes I do get a little existentially concerned about what this all means.
[00:40:36] Speaker A: Yeah, it's a great way to end our conversation. A great message: that it's about the collective, and it is integrated, both human and technology, each dependent on the input and output of the other. So, it's really a marriage that has to evolve responsibly for the good of mankind, so to speak. Jessica, some final thoughts on this. Perhaps, how can Caribbean students subscribe to the Moxie platform? Let's get this going.
[00:41:10] Speaker C: Sure.
[00:41:11] Speaker B: Well, we have a student discount. We have a large international following, a group of users, if you will.
If you go to moxielearn.ai – M O X I E – learn – .ai – at the bottom of the website or on the pricing page, you'll see a student discount. You'll get 50% off Moxie. We have several pricing tiers, including a free trial. And we're about to release Moxie 2.0, a much-improved version of our platform with new features coming soon. We're very excited about that.
So yeah, check us out. Kimberly and I love to interact with our users, so please do not hesitate to reach out to us and ask questions. I mean, we are a tech company, but we are humans, and we love the human part of this.
[00:42:03] Speaker A: Good, good. All part of the wondrous benefits of humanity overall, understanding that while we do need the machines and the technology, human beings are still quite important and pivotal in this role. So, thank you both, Dr. Becker and Dr. Parker, for an extremely enlightening conversation on this emerging field, AI, that tickles the imagination and the mind. I'm sure a lot of our students in the Caribbean community and beyond will be curious and interested in the Moxie platform, and I wish you both all the very best of success with it and for it to continue to grow to great heights. Thank you for being with me this morning.
[00:42:47] Speaker B: Thank you for having us.
[00:42:48] Speaker C: Yeah, it was a pleasure.
[00:42:50] Speaker A: Thank you. Thank you.