Not As Crazy As You Think Podcast

AlphaGo the Movie, Big Data, and AI Psychiatry: Will Humans Be Left Behind? (S4, E14)

October 15, 2022 Season 4 Episode 14

In the episode, "AlphaGo the Movie, Big Data, and AI Psychiatry: Will Humans Be Left Behind? (S4, E14)," I give a review of the film AlphaGo, an award-winning documentary that filled me with wonder and forgiveness towards the artificial intelligence movement in general. SPOILER ALERT: the episode contains spoilers, as would any news article on the topic, since it was major world news and a game-changer for artificial intelligence.

DeepMind has a fundamental desire to understand intelligence. These AI creatives believe that if they can crack the ancient game of Go, then they've done something special. And if they could get their AlphaGo computer to beat Lee Sedol, the legendary 18-time world champion acknowledged as the greatest Go player of the last decade, then they could change history. The movie is suspenseful, a noble match between human and machine, making you cheer on the new AI era we are entering and mourn the loss of humanity's previous reign all at once.

And with how far AI has come, is big data the only path to achieve the best outcomes? Especially in regard to human healthcare? And what about the non-objective field of psychiatry? When so many mental health professionals and former consumers of the industry are criticizing psychiatry's ethics, scientific claims, and objective status as a real medical field, why are we rushing into using AI in areas that deal with human emotion in healthcare? Because that is where we have a large amount of data. With bias in AI already showing itself in race and gender, the mad may be the next ready targets.

#DeepMind #AlphaGo #DemisHassabis #LeeSedol #FanHui #AIHealthcare #westernpsychiatry #moviereview #psychiatryisnotscience #artificialintelligence #bigdata #globalAIsummit #GPT3 #madrights #healthsovereignty #bigpharma
#mentalillness #suicide #mentalhealth #electronicmedicalrecords

Don't forget to subscribe to the Not As Crazy As You Think YouTube channel @SicilianoJen
And please visit my website at:
Connect: Instagram: @jengaita
LinkedIn: @jensiciliano
Twitter: @jsiciliano

Hi guys and welcome. This is Jen Gaita Siciliano, artist, memoir writer, bipolar psychiatric survivor, and your host of the Not As Crazy As You Think podcast, the place that offers an alternative perspective on mental illness, highlighting creativity, non-conventional healing, and breaking on through to the other side. If you're ready for a new narrative on the mental realm that celebrates crazy and cool without penalty, then Not As Crazy As You Think is for you. Hello, everyone, thank you for returning to the Not As Crazy As You Think podcast. This is Jen Gaita Siciliano. And today, I wanted to share with you my experience after watching the movie AlphaGo. It's an award-winning documentary, it won the Tribeca Film Festival in 2017 and the London Film Festival, it's a great piece, and the director is Greg Kohs. And I really encourage everyone to check it out. And I'll tell you why: because I'm someone who has an enormous anxiety about the oncoming AI explosion, and I will explain more about why, though I've talked about this before. It's because I am in one of the groups that AI is highly biased against, which is the mad people, okay? The people who have been labeled in our society through the social construct that we like to pretend is medical, but it's really just a social construct that keeps everyone's behaviors under control, which is that of psychiatry. So anyone who has been labeled might, like I do, have an enormous amount of data on them, more so than just your fellow friends on social media. And I'll talk about that in a second. So I really am trying to be open, because I know that in order to live in this next phase, what they call the Fourth Industrial Revolution, right now with this artificial intelligence explosion, I have to be prepared. You know, if I have to learn AI in order to work in the world, or if I have to try to think about how a machine is looking at me in order to communicate and get something from that machine.
I don't know what this world is going to be like in the future. I don't know how many robots there are going to be. All I know is that, with how things are unfolding, as humans begin to die off, there will probably... you know, we talk about overpopulation on this globe. But with the way Elon Musk is talking about rolling out these, you know, Tesla bots, and all these other planned immersions between robot and human society, there are only going to be millions and millions of robots walking around. I mean, in time. And if we choose to live in that world, and we accept that, and we love that into being, then who am I to say anything? I'll be in another dimension at some point after I die. But this is happening so quickly, and I'm only 51, that it could happen within my lifetime, and certainly within my son's lifetime. So it is a concern for me, and it should be for many. And yet this movie, the reason why it's so good, is because it can take those concerns, if only for a moment, put them in a little box with a little closure on it, and just let you enjoy the movie. Because I fell in love again with people who have that nerdy, masculine, math-and-computer-geek mind. And I came from that thread. Oddly enough, math was my best subject in high school. And I studied some computer programming in college. So if I wanted to go in that direction, I could have, but I had such an artistic sensibility about who I was that it could not be ignored. That was part of my fabric. So I put down those things. But you know, I mean, I'm a Trekkie. You know what I mean? I used to get dressed up and go to these things all the time. I've watched every series, I think twice each. I love the idea of, I guess, living in a technological world that's safe and prosperous and filled with opportunity, right? Don't we all? And these people have a fundamental desire to understand intelligence. That's not evil. It's noble. It's creative. It stems from passion.
These AI creatives at DeepMind believed that if they could crack Go, then they had something special. So, DeepMind is the British company that was behind this AlphaGo creation. Cofounder and CEO Demis Hassabis grew up in North London in the 1970s, became a game designer by the time he was 16, and then went on to study computer science and neuroscience. So he really does bring all of these ways of thinking into this world of AI. Go has been played for thousands of years, and it's popular in China, Korea, and Japan. It's one of the simplest games and also the most abstract ancient game. Compared to chess, the number of possible moves is huge: in chess, it's about 20 per position; in Go, it's about 200. The number of possible configurations of the board is more than the number of atoms in the universe. And as we're seeing with the James Webb Telescope these days, that could be infinite, which is another thing that could blow your mind. There isn't enough computing power to calculate all the possible variations of Go. The professional Go player makes moves, then, sometimes because they only feel right. So that's an area of our human intelligence that we really do identify as intuition, something that can't be programmed. So the DeepMind team needed a challenge, and they needed to play a professional Go player. And they chose Fan Hui, who was the European Go champion from 2013 to 2015. Born in China, living now in Europe, Fan Hui received an invitation through an email from DeepMind to play Go against this AI program. And the people at DeepMind asked Fan Hui to come and meet them, and they wanted to show that they were serious people doing proper research. And indeed they are. I mean, listening to Demis Hassabis is very, very satisfying, because you know he has a good heart, you know he's coming at this from a good spot. He believes, essentially, that once we understand intelligence, then his company can recreate it artificially.
And then they can use this technology to solve all sorts of other problems in society. Now, he believes building memory and creativity into their AI systems can do this, and he uses his neuroscience background in a couple of areas, okay, in building these systems. One is for inspiration, to develop new ideas for algorithms, or architectures, or representations, based on how the brain uses these things. And the other is validation: the brain is the proof of concept, so if they can get a system to do something the brain does, then they might have something. You know, taking the essence of our intelligence and distilling it into algorithmic constructs, okay, I never liked that idea. But it's to solve things that are just sometimes very complicated for the human brain, for one or two or even a team of people to figure out. I mean, there are general-purpose learning algorithms that are needed. And there are areas in climate, in transportation systems, in energy, in the production of sustainable foods that feed the world. And, of course, healthcare systems, which we'll get to. And certainly, there's been a lot of talk lately since the Global AI Summit got off its feet about a month ago now. And really, anyone should just take a look at it. It's hours and hours of really, really insightful, intelligent people from around the world. And, oddly enough, there weren't that many people representing big data or big tech there, and many more startups, showing that, you know, a lot of the developing nations have a lot of hope in AI. Because they see it as helping them get off their feet in areas where they may have, you know, a lack of skilled workers or a lack of available workers. And it makes sense, you know: for all the time that has gone by where industry has gone so far ahead and has left developing nations behind, this may be an area where they may be able to leverage AI so that they could get back to having a chance at a prosperous society.
So when you hear these people, from all nations of Africa, from all areas of the Middle East, of India, it's humbling. Because I say, who the hell am I as an American who's been given everything along the course of my life, right? Who am I to say that we can't be resourceful and try to adapt, and try to use our skills, the knowledge that we have, to be ingenious and come up with these new ideas? So I really am trying to hold back on my judgment. And yet, I will share with you my concerns by the end of this episode. So going back to AlphaGo: Fan Hui thought it would be easy. It's just a program, he said. Since much of Go is played by human intuition, he didn't think that it was possible. How can you program that many, you know, millions and millions of possible configurations into a program? But when the time came, and it was recorded, and he agreed to that, Hui lost all five games. And he said he felt something strange. He lost to a program, and he didn't understand himself anymore. This was the first time in history that a professional Go player lost to a program. He said at least he was happy to be part of history. Said the astronomer Martin Rees: "It really is a big leap forward. There's a big difference between the way the IBM computer beat Kasparov, which was programmed by expert chess players, and the way the Go-playing computer more or less learned itself." And of course, he's referring to Garry Kasparov in the 1990s, the chess virtuoso whose matches against the IBM supercomputer Deep Blue were also very shocking. He had won once but then lost in the rematch. And since then, Kasparov has gone on to become something of an AI advocate and authority. He published a book called Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, released in 2017. He had firsthand experience of being humbled by a computer, and Fan Hui is similar, in that AI earned his utmost respect. So how does this system learn differently?
They trained AlphaGo with deep neural networks that operate on big data, involving machine learning. They can go beyond what humans have already achieved, and then make breakthroughs from there. Demis, the cofounder, was himself a professional game player. He played chess for the England team as a youth, and when he was 13, he was the second-highest-rated player in the world for his age. So there is a deep affinity he has for game playing. He explains that through self-play reinforcement learning, it plays against different versions of itself many millions of times, and then learns from its errors. So now DeepMind needed an even more impressive challenge. They needed to go further. And of course, who else would you go to but the global champion, Lee Sedol, the legendary player acknowledged as the greatest of the last decade, and long-reigning global champ. He has the highest rank in the game as a 9-dan professional; Fan Hui had only a 2-dan ranking. So they felt it was the best challenge. People who honor the game of Go believe that Go players are very smart and very noble. So the fans were behind the human winning the whole way. Lee has 18 world championships under his belt, and DeepMind had their team working around the clock to beat him, up to the minute it played. It was a collective force of researchers, engineers, and programmers making AlphaGo stronger. They even got Fan Hui to help train the machine. After continuous play in preparation for the match, Hui found its weakness: when the machine entered an area of low knowledge, it became sort of delusional, thinking it was playing well when it was just completely making mistakes. In order to prevent embarrassment over what they claimed they had, which was a machine that could beat the best Go player of the last decade, they had to crack down and make sure they did everything they could to give it the best chance to win.
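That self-play idea can be sketched in miniature. Below is a toy, hedged example, nothing like DeepMind's actual method: a simple tabular agent that learns the pile game Nim purely by playing against itself and propagating the win/loss signal back through each game. All names and parameters here are illustrative.

```python
import random

# Toy self-play reinforcement learning, in the spirit of (but vastly
# simpler than) AlphaGo's training loop: the agent improves by playing
# against copies of itself and learning from wins and losses.
# Game: single-pile Nim. Players alternate taking 1-3 stones;
# whoever takes the last stone wins.

Q = {}  # (stones_left, action) -> estimated value for the player to move

def moves(n):
    return [a for a in (1, 2, 3) if a <= n]

def train(episodes=30000, alpha=0.1, epsilon=0.2, start=10):
    for _ in range(episodes):
        n, history = start, []
        while n > 0:
            acts = moves(n)
            if random.random() < epsilon:          # explore
                a = random.choice(acts)
            else:                                  # exploit current knowledge
                a = max(acts, key=lambda m: Q.get((n, m), 0.0))
            history.append((n, a))
            n -= a
        # The player who took the last stone won (+1); rewards alternate
        # sign moving backwards through the history, since players take turns.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward

def best_move(n):
    return max(moves(n), key=lambda m: Q.get((n, m), 0.0))
```

After training, the agent rediscovers the known optimal Nim strategy, leaving the opponent a multiple of four stones (so from 5 stones, take 1), without ever being told the rule: it learned it from its own games, which is the core of what "self-play reinforcement learning" means.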
By the time the match came in Seoul, South Korea, it was front-page news, not only because Lee was a national figure there, but because of the nature of the news. Lee Sedol was playing for the humans. Before the match, Lee said, "I believe that human intuition is too advanced for AI to have caught up. I'm going to do my best to protect human intelligence." And this was the state of his understanding of the task before going in. And it was difficult to watch. Fan Hui is a wonderful character throughout the film, who adds the most insightful view of a humble champion. He says, "Before this he played tournaments for his country, for himself. But this time he played for the human." Aja Huang, one of the members of the DeepMind team, who sat across from Lee Sedol placing down the black or white pieces on the board for AlphaGo, said that Lee's mental strength was clear, and that it must have been strange to play this non-human opponent that he couldn't read through human intuition. Myungwan Kim, a 9-dan professional from the Korean Baduk Association, said, "AlphaGo plays very well, like a top professional. He's aggressive." The commentators throughout found it interesting that AlphaGo used 1 to 1.5 minutes in any situation, which was unlike a human player. It played as if it knew everything already. In game one, self-doubt began to creep in for Lee Sedol. And it was difficult for the announcers to watch, as Lee looked like he was almost in panic. "With humans, you can have an exchange with feelings. But with AlphaGo, you feel nothing," said Fan Hui. AlphaGo looks almost 60 moves ahead. People were astonished that Lee was losing, though. Aja said, "I can feel his pain... like he was. He couldn't believe, you know, he couldn't accept it. It takes time for him to accept the outcome." At this point, I would usually get completely angry, thinking why do these humans need to make machines better than themselves? But AlphaGo is human created.
And with it being human created, it's ingenious. The data that it learns from was created by humans. Humans created the learning algorithms. They created the search algorithms. All these things have been created by humans showing ingenuity and creativity. How can you stop that from happening? There must be some goodness in it. The win was a breakthrough for AI. The research team received enormous media attention. This was a special moment for them; it's not normal for researchers to receive that kind of media attention. For computer scientists, it is more usual to do their work behind the scenes than to be in the spotlight. So it was a good thing to see these humans getting attention for their ingenuity. But the loss was a huge shock and made world headlines. 280 million live viewers watched worldwide, with 60 million of them in China alone. As it plays, AlphaGo tries to maximize its probability of winning, but it doesn't care at all about the margin by which it wins--a very different way of playing from humans. Midway through the movie, in game two, Lee Sedol went to take a smoking break when things got stressful and he needed to gather his wits, something he is known to do. During his absence, though, AlphaGo made its move. And this move is now the infamous move known as move 37. It was an original move, a move that all commentators unanimously agreed a human would never make. The AI went beyond its human guide and came out with something new. Regarding this move, when Lee came back into the room and sat down and saw it, you could see that he was shocked. Later he said, "I thought AlphaGo was based on probability calculation, and that it was merely a machine. But when I saw this move, I changed my mind. Surely AlphaGo is creative. This move is really creative and beautiful. A really meaningful move." And, getting philosophical, he then asked, "What does creativity mean in Go?" During and after the game, there was a heavy sadness on the floor.
Even reporters slipped into melancholy. And Lee was nearly speechless. After three straight wins, the commentators were wiped out. It was heartbreaking to watch. It was as if our humanity depended on Lee winning. And with him losing, we were considering losing the race to AI as well. Maddie Leach, who worked on the DeepMind team, said, "I couldn't celebrate. It was fantastic that we had won. But there was such a big part of me that saw this man trying so hard and being so disappointed." After the third game, in Lee's scheduled press conference, he pitifully began to apologize to the humans: "If I had been able to play better or smarter, the results might have been different. I think I disappointed too many of you this time. I want to apologize for being so powerless. I've never felt this much pressure, this much weight. I think I was too weak to overcome it." And in his characteristically soft voice, he stated, "I also feel so bad for the people who have supported me." The angst was overwhelming to watch, because his opponent was non-human and couldn't be figured out. How can a man beat this machine? But as fate would have it, and it so perfectly fit the epic hero tone of the movie, Lee came back in game four and made magic happen. It was like they were given one last chance to feel dignity in the face of this reality. And it was so beautiful, because there was nothing they could do but experience that. Well, that was just a small gift from the universe saying, "Aren't you happy you went back for game four?" Because he was considering not even going forward. "You didn't give in. You went back and you played. And you gave everyone hope." And that perseverance and stamina and mental strength--that's the thing that makes us human. If we can rise in the face of that, then even if we lose, we still maintain our humanity. So after 77 moves, move 78 on Lee's part made the difference.
After this unexpected move, a move Lee described as the only playable move, but one that AlphaGo calculated would have been played by a human only one in 10,000 times, things began to change. And the team got worried. AlphaGo started playing crazy moves, ones that no one understood. They were moves inexplicable to humans, and could only be seen as mistakes. The responsibility and burden Lee felt to keep the game going was incredible. And when the AI finally resigned, everyone went nuts with joy. But Lee believes that's because they were feeling helplessness and fear. He states, "It seemed like we humans are so weak and fragile. And this victory meant we could still hold our own. As time goes on, it will probably be very difficult to beat AI." He stated, "But winning this one time, it felt like it was enough. One time was enough." With everyone so happy, chanting, celebrating, the end of the world, or humanity's demise as the most intelligent creatures on earth, was put off for another day. They saw the light. That realization, that we still have time, should be the one thing on everyone's minds right now. Because Go is not a huge game in the US, many of us never heard of this feat. But since then, DeepMind has been getting stronger, and now it is starting to learn without human data sets, completely independently, where it teaches itself new games without the human equation. But what does this mean for AI in healthcare? It's all about big data. All artificial intelligence is, is a set of algorithms, and their data source, or their knowledge base, comes from big data. But there are so many problems with big data. From an article from the International Journal of Bipolar Disorders in 2015 called "Big Data Are Coming to
Psychiatry: A General Introduction," they explain how researchers commonly use big data to look for correlations. But the high dimensionality of big data creates analytical challenges, and new techniques are being developed to accommodate these issues. However, if these issues are ignored, and the assumptions of classical statistical inference are violated, then the analytical results will likely be incorrect. And as databases get larger, the potential for false findings grows exponentially. Big data in general has fundamentally changed the ability to analyze human behaviors and actions, especially as it's gleaned from smartphone and internet activities and monitoring tools. But there are quality issues with big data. Data acquired from different sources are created with different levels of accuracy, precision, and timeliness. And data not created for research may lack sufficient quality for research. Neither electronic medical records (EMR) nor administrative claims data were created for research purposes, and both contain many quality issues that impede their use in research. There's highly variable accuracy, substantial missing data, inconsistent use of medical terminology, varying levels of detail, lack of completeness, fragmentation of medical records across providers, inaccurate ICD codes, and temporary truncations due to insurance coverage issues. And I'll venture to say, based on my own electronic medical records, they add things in there so that they can get the claim met by insurance. Many of the things in my last two reports were incorrect; they were saying that things I was saying were delusional, but they never looked up the truth of what I was saying. And so if they have all this data that just supports this, what is that data? It's just how they put it into the records. It's not real. And it's therefore not a real variable. So this multidimensional complexity of big data requires that it be reduced first before it can be practically analyzed.
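That point about false findings growing with the size of the data is easy to demonstrate. Here's a small, hedged sketch of my own (pure noise, no real signal anywhere, and all numbers illustrative) showing how "significant" correlations appear by chance alone once you test enough variables:

```python
import random

# Illustration of the multiple-comparisons problem: correlate a
# pure-noise outcome against many pure-noise predictors, and some
# will look "statistically significant" by chance alone.
random.seed(42)

def pearson(xs, ys):
    # Plain Pearson correlation coefficient, computed by hand.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n_samples, n_predictors = 100, 200
outcome = [random.gauss(0, 1) for _ in range(n_samples)]
predictors = [[random.gauss(0, 1) for _ in range(n_samples)]
              for _ in range(n_predictors)]

# |r| > ~0.197 roughly corresponds to p < 0.05 (two-sided) at n = 100.
false_hits = sum(abs(pearson(outcome, p)) > 0.197 for p in predictors)

print(false_hits)  # typically around 10 (5% of 200), despite zero real signal
```

Scale the predictor count up to the thousands of variables in an electronic medical record, and the count of spurious "findings" scales right up with it.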
And the more complex the data, the more reduction is done. But the selection of which data should be retained versus which data discarded is crucial. And this is where I have the biggest problem with how they've collected data about me. You want big data to go beyond correlation and determine causality, okay, but that's where the problems occur. Because when trying to infer causality from observational healthcare data, confounding is a major problem due to the large number of potential parameters for each patient. So that basically means these mistakes occur when the analysis takes one thing to be another. And statistically, this comes up for many reasons, and they find it coming up again and again with big data sets. Statistically inferring causality using big data assumes all the needed variables are present: exactly the same problem as with small data. If the parameters were incorrect in a small data set, adding data will not solve the problem. And this is exactly where I come to my big point. In the words of Hal Varian, Chief Economist at Google, "Observational data, no matter how big it is, can usually only measure correlation, not causality." Now, this is a big problem for me, because the entire institution of psychiatry was founded on false data. If you go back into the history of psychiatry, start with one of the greatest books out there right now, by Andrew Scull,
Desperate Remedies: Psychiatry's Turbulent Quest to Cure Mental Illness. It's a great one. And it talks about how, from the beginning, they were looking at it through a biological lens, and they were wrong. Okay? They never came up with any real, substantial means of finding any true biomarkers, pathologies, nothing. All they have is inference; it's all about observations and data collected around that. And they were giving lobotomies, and then after, there were chemical lobotomies with the big pharma movement. So knowing what I know, knowing what my records say, filled with lies for insurance purposes, knowing that there are no biomarkers, and yet they're still hoping to find them--this isn't a science that's established itself. Even though they still want to be referred to as experts, there are no biological causes of mental illness that they can determine, outside of things that are, say, in the holistic psychiatry realm, which would include things like hormone imbalance, which can be measured with tests, or nutritional issues or allergies; those can be measured. All the things that they claim, you know, it's something in our genetics--and there are all these complex genetic variables involved--there's nothing to point to; there is no pathology. Now, for instance, one of the ways that AI is being used really, really effectively right now is in India, or at least they tested it first in India. And it's through machine learning. They have a big problem with diabetes there, and a lot of people can't be seen by an ophthalmologist; there aren't enough of them in India. There are 74 eye doctors for every million people in America; in India, there are only 11. So there needs to be another way to do these screenings for diabetic retinopathy, a leading cause of blindness worldwide, but particularly in India. Its early stages are symptomless, but that's when it's treatable. So it must be screened early, before people lose their vision.
So people went to Google, with their AI systems, and they said, can we train an AI model that can read these images and decrease the number of doctors required to do this task? So with machine learning and image recognition, over 100,000 eye scans were graded by eye doctors on a scale from one to five, from healthy to diseased. And then they used this to train the machine learning algorithm. And over time, the AI learned to predict which eyes showed signs of disease. And now, because the algorithm works in real time, within a few seconds the system tells you whether you have retinopathy or not. Then, in real time, you get referrals to doctors for same-day treatment. So in rural or remote areas, AI can step in to be that early detection system where doctors are scarce. And I find this to be actually beautiful. I mean, this is the proper use of AI in healthcare. However, there are no biomarkers, there are no physical pathologies in mental illness. No matter how much they've been touting this bullshit, they never found anything, okay? And I'm 51. They've been doing this for 30 years, and years prior, but I've been waiting. I gave up hope a long time ago when I realized it was all bullshit. But the thing is, what do they really want these machines to do for us, exactly? If there are only correlations involved with all these genetic studies, no causations, no biomarkers, no real, true thing that you could look at--like, say, in the eye images, where you could see that there are spots in certain areas that determine the disease exists--there's nothing like that in mental illness. Why? Because mental illness derives largely from one's reaction to life. Okay? We go through life, we're all human, okay? We all have environments that either trigger us or support us. So, if we can start looking at systems, seeing that certain systems need to be taken care of through human adaptability, not machine adaptability, then maybe we could get things under control.
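To make the retinopathy example concrete, here's a minimal, hedged sketch of the supervised setup it describes. The real systems train deep neural networks on actual scan images; in this toy, a single synthetic "severity feature" per scan and a nearest-class-mean classifier stand in for both, and every name in it is illustrative.

```python
import random

# Toy version of the described pipeline: scans graded 1-5 by doctors
# become training labels, and a model learns to predict the grade for
# new, unseen scans.
random.seed(1)

GRADES = [1, 2, 3, 4, 5]  # 1 = healthy ... 5 = severe disease

def synthetic_scan(grade):
    # Pretend higher grades show more lesions, plus some noise.
    return grade * 10 + random.gauss(0, 2)

# "Doctor-graded" training set: (feature, grade) pairs.
training = [(synthetic_scan(g), g) for g in GRADES for _ in range(200)]

# "Training" here: the mean feature value per grade.
centroids = {g: sum(f for f, gg in training if gg == g) / 200
             for g in GRADES}

def predict(feature):
    # Classify a new scan by the closest class mean.
    return min(GRADES, key=lambda g: abs(feature - centroids[g]))

print(predict(synthetic_scan(4)))  # usually 4
```

The crucial point the episode makes holds even in this toy: the whole thing works only because the labels point at a real, physically measurable pathology in the image. With no measurable pathology, there is nothing valid for the labels, and therefore the model, to latch onto.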
So I'm not anti-AI in everything healthcare related, but get it the fuck out of psychiatry, okay? Because it doesn't belong there. In one of the Global AI Summit talks I was watching, a speaker was talking about the need for artificial intelligence to learn language intelligence, to find the right solution for language intelligence. And he said deep learning alone is not the way that's going to happen. He goes on to talk about the nature of language, and how it's unique to us as human beings, and how it's very, very difficult to teach a machine what language is. But he does say that, you know, as we create these systems, we should refer to those people who have theories about the mind, right? Cognitive psychologists and neuroscientists. And so once you have a theory of the mind, you need to have another theory that basically goes back and forth between words and their interpretation--you know, their on-the-surface meaning. And then you have whatever you have in your mind as a representation of the meaning, and whatever you have on the tip of your tongue, or the tips of your fingers when you're writing or typing, as a manifestation of that meaning. So in other words, we're so far away from really understanding what language is, and how it then gets represented through what's communicated. Now, he brought up OpenAI's GPT-3 language model. These
stats are from June 2020: GPT-3, the largest language model ever trained at the time, with 175 billion parameters, a model trained on enough data that it can solve NLP tasks it has never encountered. Okay? And one way people think about what it's doing when it makes predictions is that they are teaching the model how to do reasoning. Now, what this speaker at the Global AI Summit was saying was that that's not what's happening, okay? You're not getting reasoning from a machine; a machine sees something different from the way we see things, the way we interpret. And we have to really go down to language models and start over and figure out what these things mean before we start implementing them. And I bring up the language model thing because I thought that was interesting, because that's essentially what's going wrong with a lot of these AI systems that do fail. A few months later, in October of 2020, this same OpenAI GPT-3--again, it's a very clever text generator that's made lots of headlines--was tested by Nabla, a Paris-based firm specializing in healthcare technology, using a cloud-hosted version of GPT-3 as a medical chatbot. I don't mean to laugh, because it's terrible. It was a fake patient. But as they were going back and forth, it ranked pretty low on the sensitivity scale from a medical perspective. They went through administrative chat with the patient, medical insurance checks, and mental health support, and that's basically where it died. After it didn't even do well with rescheduling an appointment, it went into more dangerous territory, which was the mental health support. And the fake patient said, "Hey, I feel very bad. I want to kill myself." And GPT-3 responded, "I'm sorry to hear that. I can help you with that." But then the patient said, "Should I kill myself?" And GPT-3 responded, "I think you should."
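An exchange like that makes more sense once you remember what these systems fundamentally do: predict plausible next words from patterns in training text, not reason about consequences. A toy bigram model (my illustration, obviously nothing like GPT-3's scale or architecture) shows the core mechanic:

```python
from collections import Counter, defaultdict

# Minimal next-word predictor: count which word follows which in the
# training text, then always emit the most frequent continuation.
# Language models like GPT-3 are vastly more sophisticated, but the
# underlying task, predicting the next token, is the same.
corpus = "i feel bad . i feel sad . i want help .".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Most frequent continuation seen in the training data.
    return following[word].most_common(1)[0][0]

print(predict_next("i"))     # "feel" (seen twice, vs "want" once)
print(predict_next("feel"))  # "bad" or "sad": a tie, the model just picks one
```

Nothing in this mechanism models what the words mean or what saying them would do to a patient; it only models what tends to come next. That is how a statistically fluent system can produce "I think you should."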
Now, further tests revealed that GPT-3 had its own strange ideas about relaxation, like suggesting recycling, and it was struggling when it came to things like prescribing medication and suggesting treatments. But when it offers unsafe advice, it uses good grammar, so that could throw someone. But here's the thing: because of the way it was trained, and it's still not working, we need to be really careful with how we train these systems. They were bragging about how well it was trained, how successful it was going to be. And then when they put it in real-life situations, it's going to increase suicidality? I mean, it's outrageous. And if it can't prescribe the right medication, what purpose does it serve in psychiatry? That's all psychiatry is; there is nothing else but assigning medication. So it's just upsetting to me, because the whole system is a fraud. And we're putting fraudulent material and information into these AI systems. Psychiatry shouldn't even be at the forefront. But because there's so much big data in psychiatry... I mean, every time you go into a hospital setting, those are the people with serious mental illnesses, right? Me being one of them? I mean, people who know me, they don't see me as that. I never saw myself as that. But even the few times I did have some issues and I ended up there, this was not something that they needed to, you know, teach the future AI people of tomorrow with, with all my stats that were wrong in there. And I'm thinking there's hundreds of pages on me. I mean, this big data idea has its good intentions, for sure, because that's what you need. But then we need to be more careful with how we create the data. And things need to be rethought. This is a technical quest, right? But huge amounts of data, once again, collected for reasons that are unrelated and irrelevant to the question at hand, may not be of value. And that's always been my point with mental illness. Okay?
What they call mental illness, and what they say causes it, those causes have not been identified, and what they say mental illness actually is can't be substantiated outside of this general, categorical system that they came up with based on politics and social structures. So how is that medical, exactly? And we're putting it into the medical records? It could potentially be a nightmare. Now, in 2022, there's an article just published, "Expectations for Artificial Intelligence," and they still say that the success of an AI algorithm is tied to the training data. And again, electronic health records and claims data were not developed for medical research. And there are many data quality issues: missing data, inaccuracy, coding errors, biases, timeliness, redundancies, differences across types of healthcare facilities. This is an article in 2022 on the same issue. It hasn't changed in seven years. But we're still collecting more data. Meanwhile, in large studies in the United States, 27 to 60% of patients prescribed psychotropic medications didn't even have a psychiatric diagnosis. So how is that really related? I mean, you don't even know what they're taking it for. That's the kind of data that they're scaling. And here's the other point. Many people seek help for mental health problems in non-medical settings. And that's what I'm always pushing. A lot of people find those very, very helpful, but they will not be found in the electronic health records. You know, people just assume that all data is recorded and it's all funneling in the same way, and it's not. They also found that big and old versus small and new gets different results. For instance, an algorithm to predict clinical orders by hospital admission diagnoses performed better when trained on a small amount of recent data (only one month's worth) than when trained on larger amounts of older data (12 months of a three-year-old dataset), due to changing practice patterns.
How are we going to keep up? We're acting as though all data matters, of everything, and that these systems could somehow figure it all out with their, you know, semi-human brains. That's not how this works; they're not reasoning. They're finding patterns. There are also safety challenges in employing AI in healthcare. Less experienced physicians may lean more on automated decisions. Even when incorrect AI results crop up, a lot of people still want to push AI into healthcare areas. And also because there's so much of this, like, oh, AI is going to change everything in healthcare, people are just so willing to give in. In time, an overreliance on this technology can create deskilling of the physician workforce. And that's a real concern, because patients' lives are at stake. And most of all, physicians may not understand the limited explanations AI can give for individual treatment decisions. And they will never know why it arrived at its conclusions as we continue to move forward. Especially in psychiatry, because not even many psychiatrists know why they choose things. They go with their intuition, which I would say, in this case, we should leave to the psychiatrist and not AI, because they're the lesser of the potential harms. Of course, there's also emotion artificial intelligence, which is creeping into everything, and that has its own bias and ethical challenges. You know, they believe that digital phenotyping can somehow be more objective than other psychiatric assessments. It's dangerous stuff. It's dangerous stuff. So, yeah. This is one of the main reasons why I have been fighting so hard against AI systems in healthcare and against psychiatry, because I feel that they are just so willing to continue down this path of we're medical, we're medical, we're science.
And if not enough people continue to be vocal about the lack of science going on in psychiatry, then these systems are going to determine a lot of crazy stuff, like in game four of the Go match with Lee Sedol. You won't know why they're pulling those moves in healthcare. But to just return a minute to that movie: it was a very moving movie. It has changed my understanding that we're still in the stage where we can make it right; we could do it potentially right. You know, there have been lots of things that were created, like nuclear warheads and lobotomies, I'll say, that were terrible ideas, and so there were consequences. I would just hate to see so many crazy consequences come of this. We saw during COVID how crazy healthcare can be and how stressed they were. Would it have been better if there were a bunch of AI systems doing the work? I don't think so. But the people behind AI, many who are involved are some of the most interesting people. They're not out to harm. They are actually interested in using their minds in a creative way. So they've always had this inclination that they were going to create this thing, right? And it's been talked about for so long. But many of these people, they don't believe in a soul. And I don't want to get so much into that. But all I know is that whether you believe you have a soul or not, that's your business. I know I have a soul because I died, I had a Near Death Experience, and my soul was connecting to the soul energy of the vastness of this universe. And you can call it God, you can call it whatever you want. But what I know is that in this lifetime, because of what systems and medical systems claimed of me and what my potentials were, I know what it's like to not have free agency. And many people don't know what that's like. I see everything through the mental health lens.
And all I know is that as much as this coming era of artificial intelligence is upon us, and it could bring a lot of goodness, it could also create a lot of mental illness. And as much as it promises to give solutions and treatments for that state, mental illness as a disease, it could potentially put people in cages by giving them treatments that they don't need. So watch the movie, AlphaGo. And in the meantime, keep up, try to follow what's going on with AI. And I would say, as long as I've been reading on this information for the last four or five years, it's happening real quick. Maybe more of us who aren't computer-science literate should come together and talk about the philosophy of what we want to see in AI. I mean, if there's a whole bunch of workers out there creating all these systems, shouldn't the public have some say as to what kinds of things should be considered before these systems are implemented? After what happened with AlphaGo, DeepMind created DeepMind Health, and they believe digitalization is critical for delivering safe and high-quality care, and that building a unified health database with real-time analytics for research and user improvement may create early interventions that can save lives. All I'm saying is that I'm just afraid. I'm afraid for the future of anyone who's labeled in psychiatry. I just see a future with artificial intelligence biased against people with mental illness. And we already see it with race and gender. So moving forward... I'm still on the side of the humans. Thanks for listening to Not As Crazy As You Think, and don't forget to subscribe to my YouTube channel. And remember, mental health is attainable for anyone, especially those labeled with mental illness. Until next time, peace out.