<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>AI - Medika Life</title>
	<atom:link href="https://medika.life/tag/ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://medika.life/tag/ai/</link>
	<description>Make Informed decisions about your Health</description>
	<lastBuildDate>Tue, 07 Apr 2026 05:25:21 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.5.5</generator>

<image>
	<url>https://i0.wp.com/medika.life/wp-content/uploads/2021/01/medika.png?fit=32%2C32&#038;ssl=1</url>
	<title>AI - Medika Life</title>
	<link>https://medika.life/tag/ai/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">180099625</site>	<item>
		<title>AI Will Not Fix Health Care &#8211; Leadership Might</title>
		<link>https://medika.life/ai-will-not-fix-health-care-leadership-might/</link>
		
		<dc:creator><![CDATA[Gil Bashe, Medika Life Editor]]></dc:creator>
		<pubDate>Tue, 07 Apr 2026 05:25:12 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Diseases]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[Ethics in Practice]]></category>
		<category><![CDATA[For Doctors]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[Healthcare Policy and Opinion]]></category>
		<category><![CDATA[Policy and Practice]]></category>
		<category><![CDATA[Public Health]]></category>
		<category><![CDATA[Trending Issues]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Clalit Health Services]]></category>
		<category><![CDATA[Gil Bashe]]></category>
		<category><![CDATA[Hal Wolf]]></category>
		<category><![CDATA[Harvard Medical School]]></category>
		<category><![CDATA[HIMSS]]></category>
		<category><![CDATA[Isaac Kohane]]></category>
		<category><![CDATA[LLMs]]></category>
		<category><![CDATA[Ran Balicer]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21627</guid>

					<description><![CDATA[<p>There is a moment at the HIMSS Global Health Conference when the conversation shifts. It moves away from what artificial intelligence can do and toward how it is already being used. Not in controlled pilots or planned rollouts, but in real time, by countless clinicians making decisions under pressure. Artificial intelligence is no longer a [&#8230;]</p>
<p>The post <a href="https://medika.life/ai-will-not-fix-health-care-leadership-might/">AI Will Not Fix Health Care &#8211; Leadership Might</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>There is a moment at the <a href="https://www.himss.org/">HIMSS Global Health Conference</a> when the conversation shifts. It moves away from what artificial intelligence can do and toward how it is already being used. Not in controlled pilots or planned rollouts, but in real time, by countless clinicians making decisions under pressure. Artificial intelligence is no longer a future state. It is present, embedded and influencing care before many organizations have fully decided how it should be governed. The industry is not lacking innovation. It is navigating its consequences.</p>



<p>Health systems are not stepping into artificial intelligence from a place of calm or control. In the United States, spending now exceeds $4.5 trillion, with a significant share tied up in administrative work that adds complexity more than clarity. Clinicians are caring for more patients, navigating more data and making more decisions under pressure than ever before. The system is stretched. Artificial intelligence is entering at a moment when change is no longer a choice.</p>



<p>The discussion drew on the experience of three leaders who are not observing this shift. They are guiding it. <a href="https://iowa.himss.org/resource-bio/harold-f-wolf-iii">Hal Wolf</a> leads HIMSS, influencing digital health policy and implementation across more than 100 countries. <a href="https://dbmi.hms.harvard.edu/people/isaac-kohane">Isaac Kohane, MD, PhD, Chair of Biomedical Informatics at Harvard Medical School</a>, has spent four decades defining how data informs clinical care. <a href="https://en.wikipedia.org/wiki/Ran_Balicer">Ran Balicer, MD, Chief Innovation Officer at Clalit Health Services</a>, operates within one of the world’s most integrated health systems, where data and care are aligned across generations.</p>



<p>These are not just star panelists. They are system-wide architects. What emerged from the hour-long conversation was not a survey of what artificial intelligence can do. It was a recognition that it is already doing more than most systems are prepared to guide and govern.</p>



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="696" height="445" src="https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=696%2C445&#038;ssl=1" alt="" class="wp-image-21628" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=1024%2C654&amp;ssl=1 1024w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=300%2C192&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=768%2C490&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=1536%2C981&amp;ssl=1 1536w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=2048%2C1308&amp;ssl=1 2048w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=150%2C96&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=696%2C444&amp;ssl=1 696w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=1068%2C682&amp;ssl=1 1068w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?resize=1920%2C1226&amp;ssl=1 1920w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Issac-1.png?w=1392&amp;ssl=1 1392w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /><figcaption class="wp-element-caption">Photo Credit: HIMSS: Isaac Kohane, PhD, MD, Chair of Biomedical Informatics at Harvard Medical School, shares insights from the mainstage of HIMSS</figcaption></figure>



<p>Dr. Kohane captured the tension immediately. <em>“I think that we have to worry about the fact that we’re going both too slow and too fast.”</em></p>



<p>That statement reflects a reality many leaders feel but rarely express. Governance takes time because it must. Patient safety, validation and accountability require structure. Practice moves in real time. Clinicians do not have the luxury of waiting for perfect systems.</p>



<p><em>“They’re so desperate to do right by their patients to use other resources,”</em> Dr. Kohane added.</p>



<p>That instinct is not a weakness. It reflects a commitment to doing what is right for the patient. When clinicians turn to external AI tools, they are seeking clarity, speed, and confidence in their decisions. Artificial intelligence is already present at the point of care, shaping how physicians assess information, validate thinking, and move forward. The system is not adopting AI. The system is catching up.</p>



<p>This creates a condition that is difficult to measure and even harder to manage. Different clinicians use different AI platforms. Those tools produce different answers, shaped by different assumptions. Over time, consistency erodes. The system begins to operate with multiple definitions of truth, and with the risk of divergent outcomes.</p>



<p>Dr. Kohane’s warning is not about misuse. It is about misguided permanence. <em>“The worst outcome will be if the worst parts of medicine get concrete poured over it, by AI.”</em></p>



<p>Artificial intelligence does not fix a system; without leadership, it accelerates the integration of incorrect assumptions. If workflows are inefficient, they become more efficiently inefficient. If bias exists in data, it becomes more precise. If fragmentation defines care, it scales.</p>



<h2 class="wp-block-heading"><strong>This is not a failure of technology. It is a mirror held up to system-wide leadership.</strong></h2>



<p>Hal Wolf, among the health sector’s leading policy and operational voices, grounded this moment in precedent. Health care has seen this pattern before. When internet connectivity entered hospitals, clinicians moved faster than governance. They created access where it was needed. Systems responded later. Risks were discovered after adoption.</p>



<figure class="wp-block-image size-large is-resized"><img decoding="async" width="696" height="575" src="https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=696%2C575&#038;ssl=1" alt="" class="wp-image-21629" style="width:871px;height:auto" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=1024%2C846&amp;ssl=1 1024w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=300%2C248&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=768%2C634&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=1536%2C1269&amp;ssl=1 1536w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=2048%2C1692&amp;ssl=1 2048w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=150%2C124&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=696%2C575&amp;ssl=1 696w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=1068%2C882&amp;ssl=1 1068w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?resize=1920%2C1586&amp;ssl=1 1920w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Hal-Wolf-2.png?w=1392&amp;ssl=1 1392w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /><figcaption class="wp-element-caption">Photo Credit: HIMSS &#8211; Hal Wolf, President and CEO, HIMSS, in the mainstage conversation &#8220;Recognizing the &#8216;Value Proposition&#8217; Criteria While Selecting AI Applications&#8221; with Drs. Kohane and Balicer.</figcaption></figure>



<p>Artificial intelligence now follows that same trajectory, though at far greater speed and with far greater consequences. Web connectivity gave quick access to information. Artificial intelligence influences how that information is interpreted and acted upon.</p>



<p><em>“We have to go faster,”</em> Mr. Wolf said. <em>“But there needs to be structure around it.”</em></p>



<p>That is the leadership challenge of this moment. Speed without structure creates exposure. Structure without speed creates irrelevance. The tension between the two is not something to resolve. It is something to manage continuously.</p>



<p>The industry’s response to artificial intelligence has been predictable. It has started where risk is lowest and return is clearest. Documentation, scheduling and revenue cycle optimization have become the entry points. These applications reduce burden and improve efficiency. They are necessary. However, they are not transformational.</p>



<p>The shift occurs when artificial intelligence moves into clinical decision-making. At that point, the question is no longer whether the system works. The question becomes whether it should be trusted.</p>



<p>Who owns a decision informed by an algorithm? How is accuracy validated? What happens when a clinician disagrees with a recommendation? These are not technical questions. They are questions of accountability. Artificial intelligence does not assume responsibility. It does not carry consequence. That remains with leadership.</p>



<p>Dr. Balicer reframed the conversation, shifting how the room thought about artificial intelligence. <em>“There’s no such thing as AI neutrality. Algorithms are just opinions embedded in code.”</em></p>



<figure class="wp-block-image size-full"><img decoding="async" width="696" height="523" src="https://i0.wp.com/medika.life/wp-content/uploads/2026/04/HkPtQ7MB11g_0_171_2000_1501_0_x-large.jpg?resize=696%2C523&#038;ssl=1" alt="" class="wp-image-21630" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2026/04/HkPtQ7MB11g_0_171_2000_1501_0_x-large.jpg?w=1024&amp;ssl=1 1024w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/HkPtQ7MB11g_0_171_2000_1501_0_x-large.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/HkPtQ7MB11g_0_171_2000_1501_0_x-large.jpg?resize=768%2C577&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/HkPtQ7MB11g_0_171_2000_1501_0_x-large.jpg?resize=150%2C113&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/HkPtQ7MB11g_0_171_2000_1501_0_x-large.jpg?resize=696%2C523&amp;ssl=1 696w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /><figcaption class="wp-element-caption">Photo Credit: CTECH &#8211; Ran Balicer, MD, Chief Innovation Officer at Clalit Health Services.</figcaption></figure>



<p>That insight is easy to acknowledge and difficult to operationalize. Every model reflects choices. What data is included? What outcomes are prioritized? What trade-offs are accepted? Those decisions are embedded in the system, shaping how it interprets information.</p>



<p>When a health system adopts an AI tool, it is not simply implementing technology. It is adopting a perspective.</p>



<p>At Clalit Health Services, alignment across payer and provider creates a system where priorities are consistent. Even there, external AI models introduce new assumptions. Those assumptions may not align with the system’s goals. If leadership does not define its own values, it inherits someone else’s.</p>



<p>This becomes real in proactive care. Artificial intelligence enables systems to identify patients at risk before they present. It allows for earlier intervention, often improving outcomes.</p>



<p>It also creates a new kind of pressure. <em>“The toughest choice is what not to do,”</em> Dr. Balicer said.</p>



<p>That statement deserves more attention than it receives. Health care has been built around responding to need. Artificial intelligence introduces the ability to anticipate it. When every patient can be flagged, every risk predicted and every intervention suggested, the system is no longer constrained by insight. It is constrained by capacity.</p>



<p>Artificial intelligence expands what can be done. It does not expand who can do it. Leadership becomes the act of choosing who does what based on validated data.</p>



<p>There is a moment that captures this shift. Imagine a primary care physician starting the day not with a schedule of patients who have called for appointments, but with a list generated by AI identifying individuals who are likely to experience clinical complications in the next six months. Some will develop chronic conditions. Some will require hospitalization. Some can be helped now – preventively.</p>



<h2 class="wp-block-heading">The physician cannot see them all. Artificial intelligence expands what is possible. Leadership decides what is essential and permissible.</h2>



<p>The industry often responds to complexity with activity. Organizations pilot, test and explore. They engage broadly without committing deeply. This creates motion. It rarely creates progress. Pilots are nothing more than experiments. At some point, leadership must decide what to scale, what to stop and what defines value.</p>



<p>Hal Wolf grounded the conversation in discipline. Without a defined, shared objective, effort becomes noise. Pilots create learning, though they often avoid decision-making. Leadership requires clarity. What problem are we solving? What outcome defines success? What are we willing to prioritize? Without those answers, artificial intelligence adds another layer of complexity to an already complex system.</p>



<p>Dr. Kohane brought the conversation back to the discipline of leadership. It cannot remain abstract. It must be informed by experience.</p>



<p><em>“Go and pay a few bucks and use three or four of the models… get a feel for what this does,”</em> Dr. Kohane advised.</p>



<p>That is not a call for technical fluency. It is a call for leadership proximity. Leaders cannot guide what they do not understand. Artificial intelligence does not behave consistently across models. It produces different answers, shaped by different assumptions. Without direct engagement, those differences remain hidden, and leadership becomes removed from the very decisions it is responsible for guiding.</p>



<p>This is where many organizations hesitate. Artificial intelligence feels complex and complexity invites delegation. At this moment, delegation creates distance. Leadership is required to move closer, not further away.</p>



<h2 class="wp-block-heading"><strong>Artificial intelligence is not reducing the role of leadership. It is redefining it.</strong></h2>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="696" height="536" src="https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=696%2C536&#038;ssl=1" alt="" class="wp-image-21631" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=1024%2C789&amp;ssl=1 1024w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=300%2C231&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=768%2C591&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=1536%2C1183&amp;ssl=1 1536w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=2048%2C1577&amp;ssl=1 2048w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=150%2C116&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=696%2C536&amp;ssl=1 696w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=1068%2C822&amp;ssl=1 1068w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?resize=1920%2C1479&amp;ssl=1 1920w, https://i0.wp.com/medika.life/wp-content/uploads/2026/04/Gil-Bashe-1.png?w=1392&amp;ssl=1 1392w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /><figcaption class="wp-element-caption">Photo Credit: HIMSS &#8211; Gil Bashe, Chair, Global Health and Purpose, FINN Partners and Editor-in-Chief, Medika Life, at HIMSS moderating the mainstage session &#8220;Recognizing the &#8216;Value Proposition&#8217; Criteria While Selecting AI Applications.&#8221;</figcaption></figure>



<p>This is not a gradual transition. It is already underway. Artificial intelligence is embedded in workflows, shaping decisions and influencing behavior in real time. The system is adapting whether leadership is ready or not.</p>



<p>The question is no longer whether artificial intelligence will shape the future of health. It will. The question is whether leadership will shape how it is applied.</p>



<p>Artificial intelligence will not fix health. It will scale whatever we allow it to touch. The question is whether it will scale what is best in health or what we have yet to fix.</p>
<p>The post <a href="https://medika.life/ai-will-not-fix-health-care-leadership-might/">AI Will Not Fix Health Care &#8211; Leadership Might</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21627</post-id>	</item>
		<item>
		<title>The Shift from Pure Modernity to Human-Centered Modernity</title>
		<link>https://medika.life/the-shift-from-pure-modernity-to-human-centered-modernity/</link>
		
		<dc:creator><![CDATA[Atefeh Ferdosipour]]></dc:creator>
		<pubDate>Fri, 06 Mar 2026 19:52:14 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Atefeh Ferdosipour]]></category>
		<category><![CDATA[education]]></category>
		<category><![CDATA[Human-Centered Artificial Intelligence]]></category>
		<category><![CDATA[Learning Sciences]]></category>
		<category><![CDATA[LLMs]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21613</guid>

					<description><![CDATA[<p>Throughout the history of science, it has rarely been the case that any phenomenon has remained permanent and unchanging. Theories, approaches, research methods, philosophies, and everything related to scientific perspectives have continually evolved. These changes have been adaptive and have moved toward improving human living conditions. If science is meant to serve humanity, it follows [&#8230;]</p>
<p>The post <a href="https://medika.life/the-shift-from-pure-modernity-to-human-centered-modernity/">The Shift from Pure Modernity to Human-Centered Modernity</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Throughout the history of science, it has rarely been the case that any phenomenon has remained permanent and unchanging. Theories, approaches, research methods, philosophies, and everything related to scientific perspectives have continually evolved. These changes have been adaptive and have moved toward improving human living conditions. If science is meant to serve humanity, it follows that whenever a tool fails—for whatever reason—to fulfill this responsibility effectively, it must either change or, over time and under changing circumstances, be updated into a more efficient version.</p>



<p>But from the perspective of philosophers of science, when do such shifts in scientific approaches actually occur?</p>



<h2 class="wp-block-heading"><em><strong>Thomas Kuhn’s Perspective</strong></em></h2>



<p>Kuhn believed that changes in scientific approaches resemble political revolutions. Simply put, when a government can no longer manage society or effectively administer its affairs, dissatisfaction gradually spreads among the public and opposition begins to form. In other words, the inability to respond to society’s needs becomes the driving force behind revolutionary movements. This process continues until a capable system emerges that can meet those needs, eventually leading to the establishment of a new order.</p>



<p>A similar process occurs in what Kuhn calls scientific revolutions. According to him, in every era the majority of scientists accept and follow a general framework. Kuhn refers to this dominant framework — which contains a collection of theories and practical models — as a paradigm. Paradigms are patterns widely followed by scholars, such as the paradigm of modernity or the paradigm of cognitive science.</p>



<p>As long as these paradigms remain aligned with the requirements of life and are capable of addressing existing problems, they continue to be valued and are used in major policy frameworks. However, when a dominant paradigm fails to respond to contemporary challenges and the solutions derived from it prove ineffective at addressing large-scale needs, doubts arise about its continued relevance. Under such circumstances, dissatisfaction intensifies to the point that scholars begin to consider laying the groundwork for a new, updated paradigm.</p>



<p>In his book The Structure of Scientific Revolutions, Kuhn emphasizes that scientific transformations are not linear or step-by-step processes. Rather, they are complex and revolutionary developments in which social and historical factors play a crucial role. Under normal conditions, scientists operate within the framework of an accepted paradigm — what Kuhn calls normal science. However, when persistent anomalies emerge and the paradigm proves incapable of addressing them, the existing structure eventually collapses and a scientific revolution occurs.</p>



<h2 class="wp-block-heading"><em><strong>Karl Popper’s Theory of Science</strong></em></h2>



<p>Like many philosophers of science, Popper believed that change is not only inevitable but also a necessity. The Popperian view rests on the principle of falsifiability. In this framework, science begins with a problem, and solving a problem means finding solutions to existing challenges. As long as a scientific theory remains open to criticism and falsification, it retains the capacity to address and solve problems.</p>



<p>In Popper’s view, bold conjectures do not weaken science; rather, they strengthen it. Solutions proposed under the principle of falsifiability help correct previous errors, and this is precisely where the strength of the scientific approach lies. If existing approaches are not falsifiable, they lose the possibility of logical trial and error and are therefore considered weak. In such cases, the need for a shift in approach and the introduction of new models becomes evident.</p>



<p>Popper believed that learning is essentially problem-solving guided by the principle of falsifiability.</p>



<p>To move beyond temporary and ineffective solutions, followers of science must avoid false certainties, accept falsification, and search for effective alternatives.</p>



<h2 class="wp-block-heading"><strong><em>The Need to Shift from Data-Driven AI to Learning-Science-Based AI</em></strong></h2>



<p>Today, numerous criticisms are directed at the purely computational and mechanical approach to artificial intelligence. In constructive critiques, the goal is not to deny the existence of large language models; rather, the central question concerns <strong>how</strong> and <strong>under what conditions</strong> they should be used. There is a growing consensus that the closer artificial intelligence moves toward the <strong>essence of human cognition</strong>, the lower its potential risks become.</p>



<p>In recent years, I have repeatedly emphasized that human theories and perspectives must be reexamined through a technological and contemporary lens so that the nature of the human mind is properly reflected in technologies that themselves were modeled after it.&nbsp;</p>



<p>My focus lies on deep theories of learning <strong>(including cognitive approaches, neuroscience, behaviorism, evolutionary perspectives, structuralism, and other related frameworks).</strong></p>



<p>In this direction, the following steps appear essential:</p>



<p><strong>1. </strong><em>Integrating human and computational perspectives</em></p>



<p>The current approach, which relies excessively on <strong>probability laws</strong> in large language models, must be integrated with psychological perspectives. A reasonable solution is to pursue interdisciplinary studies and systematic research in this area.</p>



<p><strong>2. </strong><em>Revisiting theories of the learning sciences</em></p>



<p>Theories that analyze the human mind and behavior should be reassessed by specialists, and their practical dimensions should be extracted for application in advanced technologies.</p>



<p><strong>3. </strong><em>Developing integrative (hybrid) approaches</em></p>



<p>Experts should develop comprehensive perspectives on learning derived from multiple scientific approaches so that, based on research rather than mere speculation, practical recommendations can be provided to designers and engineers.</p>



<p>In general, the time has come to move beyond a purely logical and mathematical approach toward a <strong>human-centered perspective</strong>. To address the concerns and challenges surrounding artificial intelligence, we must return to systematic and interdisciplinary research.</p>



<p>The era of relying on personal opinions without a research foundation — or on mathematical rules alone — has come to an end. Now is the time to revisit the <strong>learning sciences</strong> from a new perspective in order to realize truly <strong>human-centered artificial intelligence</strong>.</p>



<h2 class="wp-block-heading"><strong>Author’s Note:</strong></h2>



<p>The ideas presented in this article are part of a broader research project. I am currently working on a comprehensive book on a new approach to human-centered artificial intelligence with a strong emphasis on the learning sciences. While a detailed and systematic discussion of these concepts is presented in Chapter Two, the book also includes a dedicated chapter introducing the new paradigm&#8217;s framework. Furthermore, at least one chapter is specifically focused on the practical methods and applied implications of this approach for implementation in artificial intelligence systems.</p>



<p><em>References</em></p>



<p>• Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.</p>



<p>• Popper, K. (1959). The Logic of Scientific Discovery. Hutchinson.</p>



<p>• Popper, K. (1963). Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge.</p>
<p>The post <a href="https://medika.life/the-shift-from-pure-modernity-to-human-centered-modernity/">The Shift from Pure Modernity to Human-Centered Modernity</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21613</post-id>	</item>
		<item>
		<title>Is Your LLM Mentor Human Enough?</title>
		<link>https://medika.life/is-your-llm-mentor-human-enough/</link>
		
		<dc:creator><![CDATA[Atefeh Ferdosipour]]></dc:creator>
		<pubDate>Sun, 15 Feb 2026 01:15:30 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Atefeh Ferdosipour]]></category>
		<category><![CDATA[Biology]]></category>
		<category><![CDATA[education]]></category>
		<category><![CDATA[LLMs]]></category>
		<category><![CDATA[Mentors]]></category>
		<category><![CDATA[Neurons]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21601</guid>

					<description><![CDATA[<p>In every professional and personal sphere—be it business, medicine, engineering, or parenting—we inherently need a mentor. However, we don&#8217;t need a mentor who simply validates us; we need one who scaffolds our progress step-by-step. A true mentor is one whose stance doesn&#8217;t shift instantly with our every response. Despite being flexible and open to different [&#8230;]</p>
<p>The post <a href="https://medika.life/is-your-llm-mentor-human-enough/">Is Your LLM Mentor Human Enough?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In every professional and personal sphere—be it business, medicine, engineering, or parenting—we inherently need a mentor. However, we don&#8217;t need a mentor who simply validates us; we need one who scaffolds our progress step-by-step. A true mentor is one whose stance doesn&#8217;t shift instantly with our every response. Despite being flexible and open to different perspectives, they do not easily abandon their position based solely on our feedback.&nbsp;</p>



<p>Mentorship is, at its core, an educational role, and it must therefore operate on established pedagogical principles. The emergence of any new technology can reshape both concepts and practices. </p>



<p>One of the most profoundly affected areas over the last two years is &#8220;Education.&#8221; In the era of Artificial Intelligence and the race to deploy Large Language Models (LLMs), educational systems have been reshaped faster than almost any other sector. As global giants compete for AI investment, educational institutions are racing just as hard to research the qualitative and quantitative uses of AI.</p>



<p>Central to this is the concept of &#8220;Mentoring and Mentorship.&#8221; As the name suggests, it refers to guiding the flow of thought and performance of a human user.&nbsp;</p>



<p>Since this process involves providing specialized knowledge to achieve a specific result, we can say a mentor is akin to a &#8220;teacher&#8221; in a formal classroom, and mentoring is fundamentally an educational concept.</p>



<h2 class="wp-block-heading"><strong><em>Redefining Mentorship in the Age of LLMs</em></strong></h2>



<p>Both the term and the practice of mentorship have been transformed by LLMs like GPT and Gemini. Yet, despite the ease they offer, this shift is open to critique and raises significant concerns.&nbsp;</p>



<p>Choosing an AI mentor is far more difficult than choosing a human one, because an AI is an ultra-fast intelligent machine that lacks experiential history and relies instead on massive data processing.</p>



<p>Among the hundreds of apps recommended daily, three giants dominate this field:</p>



<p>• Gemini 3 Pro: The &#8220;Analytical and Realistic&#8221; mentor. Accesses live data and all your personal files.</p>



<p>• ChatGPT 5.2: The &#8220;Strategic and Methodological&#8221; mentor. Provides a framework for your mental chaos.</p>



<p>• Claude 4.5: The &#8220;Literary and Considerate&#8221; mentor. Focused on human-like tone and output quality.</p>



<p>According to February 2026 statistics (LMSYS Arena &amp; Artificial Analysis), ChatGPT 5.2 leads in reasoning intelligence, while Gemini 3 Pro excels in memory and processing speed.&nbsp;</p>



<p>However, in mentorship, quantitative superiority is not the whole story. While Gemini is touted as analytical and exploratory, I believe further investigation is needed:&nbsp;</p>



<p>1. Which model analyzes, and on what topics?</p>



<p>2. Quantitative and mathematical? Qualitative and characteristic? In what context?</p>



<p>3. Similarly, if ChatGPT is &#8220;strategic,&#8221; can logic truly be separated from data critique? Is &#8220;strategizing&#8221; not dependent on one&#8217;s unique mental background? And what, exactly, does a &#8220;considerate writer&#8221; mean in this context?</p>



<h2 class="wp-block-heading"><strong><em>Scaffolding: Human Mentoring vs. Large Language Models</em></strong></h2>



<p>Let us compare the two. The most striking feature of a human mentor is their experiential background and their specific perception of that experience—which includes an interpretation and an emotional component.&nbsp;</p>



<p>A human mentor provides an empirical direction shaped by cognitive and emotional dimensions alongside their knowledge.&nbsp;</p>



<p>Conversely, an LLM is a statistical model trained on vast text corpora, sometimes supplemented by real-time retrieval from the web. It lacks lived experience and cannot integrate intuition or &#8220;gut feeling&#8221; into a decision-making system.</p>



<p>While AI excels at helping with &#8220;brainstorming&#8221; by providing a vast range of references instantly, it suffers from a fundamental flaw: the absence of personal perception and the emotional weight that is vital in mentoring.</p>



<p>Furthermore, the stages of guidance differ. Human mentoring is a gradual, step-by-step flow. A human mentor assesses your capacity and scaffolds you accordingly. In contrast, with GPT or Gemini, there is no &#8220;scaffold.&#8221; Education is not incremental, and there is no cognitive challenge.</p>



<p>The model provides a massive amount of information in one or two steps. The user is pleased with the instant result, but a &#8220;missing link&#8221; remains: the user becomes perpetually dependent on the AI. They cannot independently solve subsequent challenges because they never underwent the necessary experiential and cognitive stages.</p>



<h2 class="wp-block-heading"><strong><em>A Biological Analysis</em></strong></h2>



<p>Biologically, learning and acquisition rest on protein synthesis at the synaptic level. This process is triggered when an organism encounters challenging and unfamiliar subjects.&nbsp;</p>



<p>According to the laws of evolution, the brain automatically triggers biochemical reactions to resolve these challenges, ultimately leading to &#8220;Learning&#8221; and &#8220;Adaptation.&#8221;</p>



<p>When a human mentor gradually confronts a user with their errors and potential consequences, they provide the necessary neurobiological challenge.&nbsp;</p>



<p>This scaffolding is exactly what an evolved brain requires for &#8220;Deep Learning&#8221; to occur. However, when dealing with a &#8220;Digital Mentor,&#8221; this cognitive elasticity disappears. The process of &#8220;Cognitive Trial and Error&#8221; is compressed into a high-speed instant.&nbsp;</p>



<p>The digital mentor dictates, and the user merely mimics and obeys. This pattern does not align with our biological necessity. Therefore, this process cannot be considered natural mentoring; it is merely &#8220;Modeling.&#8221;</p>



<h2 class="wp-block-heading"><em><strong>Conclusion and Critical Perspective</strong></em></h2>



<p>In recent years, the surge of trend-driven discourse surrounding education and Artificial Intelligence has led to the analysis and judgment of fundamental pedagogical concepts without sufficient theoretical or empirical backing. </p>



<p>The oversimplification of concepts such as Mentoring, Scaffolding, and Large Language Models (LLMs) risks reducing them to mere buzzwords—widely used yet hollow. Therefore, it is essential that this movement be examined by specialists grounded in scientific evidence and core educational principles, ensuring that superficial, word-centric views are replaced by rigorous, research-based analysis.</p>



<p>In this article, mentoring was addressed as a dependent subset of Education—a concept that, whether in formal settings like schools and universities or in informal domains such as personal life, healthcare, industry, and business, remains rooted in the profound foundations of the learning process. Furthermore, the relationship between scaffolding, mentoring, and LLMs was scrutinized.</p>



<p>Based on the arguments presented, the primary challenge is not the necessity of digital mentors, but rather that these mentors are currently simulated versions, not complete replacements for human mentors. In this regard, the following questions demand serious investigation and review:</p>



<p>• Can development companies scientifically bridge the gaps identified in this article?</p>



<p>• Is it possible to integrate a form of experiential history, historical memory, and emotional/perceptual dimensions into digital mentors to truly impact a user’s deep learning process?</p>



<p>• Can they activate the biochemical mechanisms and cognitive friction necessary for deep learning and adaptation to new situations within the user-system interaction?</p>



<p>• How deep and operational is these companies&#8217; understanding of Scaffolding, and can they genuinely integrate it into innovative design?</p>



<p>If a precise understanding of these gaps and challenges is formed, the digital mentors developed by tech giants could evolve beyond passive information packages. By leaning on the Sciences of Learning, they could redesign the process of educational guidance into one that is both challenging and incremental.</p>



<p>The core issue is not the necessity or lack thereof of the digital mentor; the issue is whether it can recreate the challenge, the experience, and the gradual process of learning, or if it will simply replace growth with speed.</p>



<h2 class="wp-block-heading"><em><strong>References</strong></em></h2>



<p>1. Primary AI Benchmarks (2026):</p>



<p>• LMSYS Chatbot Arena (the industry standard for human-preference and helpfulness ranking).</p>



<p>• MMLU-Pro (a leading benchmark for advanced reasoning and multi-step logic).</p>



<p>• Gemini Technical Reports 2026 (official performance metrics for real-time data latency and multimodal accuracy).</p>



<p>2. Specialized Publications by the Author:</p>



<p>• Ferdosipour, A. (2026). Choosing an AI Mentor That Challenges Your Mind: My Statistics.</p>



<p><a href="https://www.linkedin.com/pulse/choosing-ai-mentor-challenges-your-mind-my-statistics-ferdosipour-y0g2f?utm_source=share&amp;utm_medium=member_ios&amp;utm_campaign=share_via">https://www.linkedin.com/pulse/choosing-ai-mentor-challenges-your-mind-my-statistics-ferdosipour-y0g2f?utm_source=share&amp;utm_medium=member_ios&amp;utm_campaign=share_via</a></p>



<p>• Medika Life (2025/2026). What 2025 Taught Us and What 2026 Will Demand.</p>



<p>• Medika Life (2026). Why Biological Learning Demands the Friction We Seek to Delete.</p>
<p>The post <a href="https://medika.life/is-your-llm-mentor-human-enough/">Is Your LLM Mentor Human Enough?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21601</post-id>	</item>
		<item>
		<title>Who Will Direct Patient Care: Physicians or Technocrats?</title>
		<link>https://medika.life/who-will-direct-patient-care-physicians-or-technocrats/</link>
		
		<dc:creator><![CDATA[Gil Bashe, Medika Life Editor]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 15:07:29 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Diseases]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[Ethics in Practice]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[Policy and Practice]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[American Medical Association]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Danny Sands]]></category>
		<category><![CDATA[Healing the Sick Care System: Why People Matter]]></category>
		<category><![CDATA[Humata Health]]></category>
		<category><![CDATA[John Nosta]]></category>
		<category><![CDATA[John Whyte]]></category>
		<category><![CDATA[Optum]]></category>
		<category><![CDATA[Society for Participatory Medicine]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21571</guid>

					<description><![CDATA[<p>Not long ago, a physician’s most powerful instrument was not a machine, an algorithm, or a digital platform. It was presence. Listening with intention. Judgment shaped by experience and compassion. Today, as medicine is being reshaped by artificial intelligence, predictive analytics and digital systems, technologies are advancing at remarkable speed. These innovations promise earlier diagnosis, [&#8230;]</p>
<p>The post <a href="https://medika.life/who-will-direct-patient-care-physicians-or-technocrats/">Who Will Direct Patient Care: Physicians or Technocrats?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Not long ago, a physician’s most powerful instrument was not a machine, an algorithm, or a digital platform. It was presence. Listening with intention. Judgment shaped by experience and compassion. Today, as medicine is being reshaped by artificial intelligence, predictive analytics and digital systems, technologies are advancing at remarkable speed.</p>



<p>These innovations promise earlier diagnosis, greater precision and improved efficiency by augmenting the knowledge and insight that health professionals develop through years of care. Yet beneath this progress lies a more difficult question. Will we use technology to strengthen the physician–patient relationship, or allow it to redefine the nature of care?</p>



<p>As written in <em><a href="https://a.co/d/04ILhkhW">Healing the Sick Care System: Why People Matter</a></em>, “…the system is not broken because it lacks innovation, talent, or investment, but because it has lost sight of the people it exists to serve.” Technology is not the epicenter of care. It is meant to support communication, deepen relationships, and strengthen the human bond at the center of medicine.</p>



<p>Yet as artificial intelligence becomes embedded in diagnostics, decision support, documentation, reimbursement and care navigation, extraordinary clinical potential is accompanied by a growing tension.</p>



<h2 class="wp-block-heading"><strong>Two Encounters, One Technology</strong></h2>



<p>For instance, in a primary care practice, a physician begins a routine visit with a patient in their mid-50s who has diabetes and hypertension. An ambient AI system seamlessly documents conversations, captures symptoms, updates medications, and generates a clinical note. The physician no longer turns toward a screen. Connection with the patient is essential. The patient speaks openly about fatigue, stress, and concern about long-term health.</p>



<p>Midway through the visit, the electronic record surfaces an AI-generated prompt suggesting an adjustment in therapy based on predictive risk modeling. The physician pauses, not to mindlessly follow the algorithm, but to ask additional questions about daily routine, financial constraints, and willingness to adopt lifestyle changes. Technology informs conversation. It does not replace it.</p>



<p>When the visit ends, documentation is complete, the treatment decision is shared, and the patient leaves with confidence, clarity and a sense of partnership in care. The physician directs the encounter. Technology supports judgment and understanding. The visit feels thoughtful, personal and grounded in relationship.</p>



<p>Now imagine the same technology in a different environment. The documentation remains seamless. The prompts still appear. The system functions efficiently. But here, the pace is set as much by operational demand as by clinical judgment. The schedule tightens. The visit is short. The physician moves quickly from one room to the next, guided less by the patient&#8217;s story and more by the system&#8217;s tempo. The encounter becomes transactional and compressed. Technology has not changed. What has changed is who is directing the care.</p>



<p>This is the quiet divide now shaping modern medicine. One path preserves physician-directed care, where technology supports human understanding. The other reflects system-directed transaction, where efficiency begins to overshadow the relationship. The difference lies not in the tool but in the priorities that shape its use.</p>



<p>This question of direction is not theoretical. It reflects a deeper shift in how technology may shape human judgment itself. Innovation theorist <a href="https://www.psychologytoday.com/us/contributors/john-nosta">John Nosta,</a> whose work has long been rooted in the health sector and now spans a broader landscape, cautions in his <em>Psychology Today</em> column: <em>“Artificial intelligence is far from neutral, and we need to be careful by calling it simply a tool. By simulating understanding, it may reshape what humans expect from thinking itself. Over time, it can erode the habits required for discernment. And this danger is cumulative. It doesn&#8217;t announce itself as failure. It arrives as convenience.”</em> Nosta is also the author of the upcoming book: <em>The Borrowed Mind—Reclaiming Human Thought in the Age of AI.</em></p>



<h2 class="wp-block-heading"><strong>When Technology Reflects the System Around It</strong></h2>



<p>Technology itself is not the challenge. When developed in partnership with physicians, nurses, and other health professionals, it can be transformative. Many of the most effective innovations emerge when developers observe the realities of care and design tools that strengthen human interaction rather than disrupt it.</p>



<p><a href="https://www.ama-assn.org/about/authors-news-leadership-viewpoints/john-j-whyte-md-mph">John Whyte, MD, MPH, CEO of the American Medical Association</a>, has emphasized that artificial intelligence must support physicians and care teams, not replace clinical judgment, and that technology should strengthen, not weaken, the physician–patient relationship.</p>



<p>A clear example of this tension is emerging in the context of prior authorization. Health professionals and administrative staff often spend more than a dozen hours each week navigating authorization requirements, time taken directly from patient care. <a href="https://www.optum.com/en/about-us/news/page.hub5.ai-powered-digital-prior-authorization.html">New AI-enabled platforms, such as Optum’s Digital Authorization Complete powered by Humata Health</a>, are designed to remove that burden by embedding real-time automation into clinical workflows and reducing manual steps. These innovations restore something invaluable: time.</p>



<p>Now, the deeper question is not technological but human. When time is returned to the system, how will it be allocated to the health professional? Will it allow clinicians to deepen their understanding of patient needs and strengthen their connection? Or will it simply enable the system to see more patients during their shift? The technology is neutral. Its meaning is shaped by people’s intent.</p>



<p>Health care operates within systems shaped by financial and operational pressures. In a transactionally driven environment, even well-intentioned technology can be redirected toward productivity rather than connection. A tool designed to restore time can become a mechanism to increase throughput. A system intended to support thoughtful care can accelerate volume in a fee-for-service environment. Technology inevitably reflects the values and objectives of the system in which it is deployed. It is not the technology that directs decisions and action; it&#8217;s the leadership.</p>



<p>The scale of investment underscores the stakes. The global AI in health market, estimated at roughly $36–39 billion in 2025, is projected to grow substantially in the coming decade. Investment shapes priorities. Priorities shape design. Design shapes experience. And experience shapes trust.</p>



<p>Emerging guidance aligned with the <a href="https://www.ama-assn.org/practice-management/digital-health/augmented-intelligence-medicine">American Medical Association</a> emphasizes that artificial intelligence must remain under meaningful clinical oversight. Technology must support physicians and care teams, not replace judgment or responsibility. Governance, transparency, and continuous evaluation are essential to ensure that technology strengthens patient safety, clinical reasoning, and trust.</p>



<p>This perspective aligns with participatory medicine. <a href="https://drdannysands.com/">Dr. Danny Sands of the Society for Participatory Medicine</a> has described health care not as a service transaction, but as a collaboration between patient and clinician. In that view, technology should support relationship-centered care, not redirect medicine toward system-driven throughput.</p>



<h2 class="wp-block-heading"><strong>The Direction of Care</strong></h2>



<p>Health systems face real pressures: workforce shortages, clinician burnout, chronic disease, and financial strain. These realities demand smarter and more scalable solutions. Artificial intelligence offers meaningful progress. It can detect disease earlier, reduce administrative burden, and support more informed decisions. But efficiency is not healing.</p>



<p>Healing occurs when patients feel understood, supported, and guided by clinicians who have the time and space to listen and respond with care. When technology restores time and that time deepens connection, it fulfills its promise. When reclaimed time becomes additional volume, something essential is diminished.</p>



<p>Artificial intelligence will continue to shape medicine. The deeper question is not whether technology will advance, but who will decide how it is used and for what purpose.</p>



<p>If guided primarily by efficiency, care risks becoming faster but less human. If guided by partnership with physicians and patients, it can restore time to listen, space to understand, and the ability to decide together. Technology is not the healer. People are.</p>



<p>When guided by clarity of purpose, with the patient at the center of effort, and grounded in physician-guided judgment, technology becomes what it was always meant to be: a force that strengthens knowledge, deepens understanding, and restores the bond between physician and patient. Systems matter. They enable scale, coordination, and progress. Yet their purpose is fulfilled only when they serve people. Health care is at its best when human connection and well-designed systems work together in the service of healing.</p>
<p>The post <a href="https://medika.life/who-will-direct-patient-care-physicians-or-technocrats/">Who Will Direct Patient Care: Physicians or Technocrats?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21571</post-id>	</item>
		<item>
		<title>Constructive Arousal vs. Eliminated Anxiety</title>
		<link>https://medika.life/constructive-arousal-vs-eliminated-anxiety/</link>
		
		<dc:creator><![CDATA[Atefeh Ferdosipour]]></dc:creator>
		<pubDate>Mon, 26 Jan 2026 23:50:20 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[For Practitioners]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Anxiety]]></category>
		<category><![CDATA[Atefeh Ferdosipour]]></category>
		<category><![CDATA[neuroscience]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[Psychology]]></category>
		<category><![CDATA[Public Health]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21537</guid>

					<description><![CDATA[<p>My current mindset for creating a deep connection between technology and humans is based on applying strong theories from behavioral and educational sciences. I still deeply believe that scientific sources, focused research, and solid theories are the best tools available. Since my field of study is educational psychology, and I am especially familiar with learning [&#8230;]</p>
<p>The post <a href="https://medika.life/constructive-arousal-vs-eliminated-anxiety/">Constructive Arousal vs. Eliminated Anxiety</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>My current mindset for creating a deep connection between technology and humans is based on applying strong theories from behavioral and educational sciences. I still deeply believe that scientific sources, focused research, and solid theories are the best tools available.</p>



<p>Since my field of study is educational psychology, and I am especially familiar with learning sciences, I write mostly about them. I believe combining research-based evidence is always more valuable and reliable than relying solely on personal ideas, even if they are logical.</p>



<p>In my writings and articles, I have repeatedly emphasized that sometimes we need to look back and integrate well-established scientific theories with modernity and artificial intelligence. I combine scientific evidence, including research articles and theoretical frameworks, with my own analyses, using them as a bridge to technology.</p>



<p>This approach and strategy prevent many potential risks. Instead of a preachy, rigid, or purely philosophical perspective, we adopt a systematic, scientific approach to derive practical solutions. One of the concerns frequently discussed these days, which I have also raised in my recent articles, is the &#8220;consequences of excessive ease of performance through artificial intelligence.&#8221; In my latest article, I discussed the absence of &#8220;Friction.&#8221;</p>



<p>In this article, I do not intend to discuss Friction directly but rather focus on another, related challenge in the same area: the &#8220;level of anxiety and arousal involved in performance.&#8221;</p>



<p>First, I will briefly explain this concept and then examine its connection to artificial intelligence systems.</p>



<h2 class="wp-block-heading"><strong>Arousal Theory in Learning Psychology</strong></h2>



<p>One important theory in the neurophysiology of learning is Donald Hebb’s framework, which aligns with evolutionary approaches.</p>



<p>According to these perspectives, the human brain needs challenges to survive. The nervous system has evolved in challenging environments, and both anxiety and an optimal level of arousal have always been essential for survival. They increase alertness against potential risks and guide humans toward growth and the adaptation of necessary skills.</p>



<p>Donald Hebb, a neuroscientist, studied human learning, and one of his significant contributions was explaining the role of arousal in learning.</p>



<p>In Hebb’s framework, “arousal” is considered the fuel for the cerebral cortex to process information. Learning depends on neural plasticity, and this process occurs under an optimal level of arousal.</p>



<p>From this perspective, the brain is not simply trying to reduce tension but is seeking an optimal level of stimulation. If environmental stimuli are too low, the brain may create artificial stimuli or lose part of its natural efficiency.</p>



<p>As a result, neural firing and synaptic strengthening occur under the influence of arousal, and when arousal decreases significantly, the likelihood of forming or strengthening these connections decreases.</p>
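<p>Hebb&#8217;s principle is often summarized as &#8220;cells that fire together wire together,&#8221; written as the weight update &#916;w = &#951;&#183;pre&#183;post. As a rough illustration of the paragraph above (my own sketch, not from the article), arousal can be modeled as a gain on the learning rate, so that very low arousal produces little synaptic strengthening:</p>

```python
# Toy Hebbian update: delta_w = (base rate scaled by arousal) * pre * post.
# Treating arousal as a multiplicative gain is an illustrative assumption,
# not a formula Hebb himself specified.

def hebbian_update(w, pre, post, base_rate=0.1, arousal=1.0):
    """Strengthen a connection weight in proportion to correlated activity."""
    return w + base_rate * arousal * pre * post

w = 0.5
engaged = hebbian_update(w, pre=1.0, post=1.0, arousal=1.0)  # challenged learner
passive = hebbian_update(w, pre=1.0, post=1.0, arousal=0.1)  # low-arousal learner

assert engaged > passive  # less arousal, less synaptic strengthening
```

<p>The point of the sketch is only the inequality at the end: when arousal collapses toward zero, the update term collapses with it, and the connection barely changes.</p>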



<p>In addition to Hebb’s explanation, the classical “Yerkes-Dodson Law” also supports this necessity. According to this law, human performance improves with increasing physiological or mental arousal up to a certain point. When arousal is very low (a state toward which AI tools tend to push us), individuals experience reduced focus and cognitive motivation, and learning efficiency reaches its lowest point. In fact, a certain level of pressure or anxiety is not harmful; it is a prerequisite for achieving peak mental performance.</p>
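<p>The Yerkes-Dodson relationship can be sketched numerically. A minimal illustration, using a Gaussian as an assumed stand-in for the curve (the law itself specifies only the qualitative inverted-U shape, not a particular formula):</p>

```python
import math

def performance(arousal, optimum=0.5, width=0.2):
    """Inverted-U curve: performance peaks at a moderate, optimal arousal level.
    The Gaussian form and the parameter values are illustrative assumptions."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

low, optimal, high = performance(0.05), performance(0.5), performance(0.95)
assert optimal > low and optimal > high  # both under- and over-arousal hurt
```

<p>Both tails of the curve matter for the argument here: the familiar worry is over-arousal (test anxiety), but the AI-era worry is the left tail, where stimulation is too low for focus and learning.</p>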



<h2 class="wp-block-heading"><strong>The “Arousal Gap” Challenge in Interaction with AI</strong></h2>



<p>As briefly explained in Hebb’s framework, the prerequisite for the neural interactions that lead to learning, perception, and cognitive actions is stimulation and arousal.</p>



<p>This moderate level of stimulation, which Hebb calls optimal arousal, is neither unpleasant nor at odds with the brain&#8217;s evolutionary nature in adaptation processes.</p>



<p>Now, imagine that a significant portion of our tasks is performed by an artificial partner and creates no direct cognitive responsibility for the individual. In such a scenario, what challenge will arise in human thinking?</p>



<p>These days, many articles and writings discuss the &#8220;excessive ease&#8221; challenge posed by AI tools. However, this article focuses specifically on how these tools push arousal below the optimal level described in Donald Hebb&#8217;s framework. Here, anxiety is considered one form of arousal, not equivalent to it entirely.</p>



<p>If AI performs most daily tasks without any prior stimulation or anxiety, and without active cognitive engagement on the user&#8217;s part, then instead of the tools being under the consumer&#8217;s control, the consumer comes under the control of the tools.</p>



<p>From an evolutionary perspective, under such conditions, learning and cognitive adaptation processes will not align with the brain’s natural growth patterns, and the likelihood of effective knowledge adaptation will decrease.</p>



<p>The manifestations of this challenge will likely be observed in longitudinal studies as changes in the quality of cognitive performance and in neural circuit activity patterns.</p>



<h2 class="wp-block-heading"><strong>References</strong></h2>



<p>Olson, M. H. &amp; Hergenhahn, B. R. (2020). An Introduction to Theories of Learning (10th ed.). Routledge.&nbsp;</p>



<p>Schachtman, T. R. &amp; Reilly, S. (Eds.). (2011). Associative Learning and Conditioning Theory: Human and Non‑Human Applications. Oxford University Press.</p>
<p>The post <a href="https://medika.life/constructive-arousal-vs-eliminated-anxiety/">Constructive Arousal vs. Eliminated Anxiety</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21537</post-id>	</item>
		<item>
		<title>The Best Dating Game in Health Innovation Happens Just Off the Main Stage</title>
		<link>https://medika.life/the-best-dating-game-in-health-innovation-happens-just-off-the-main-stage/</link>
		
		<dc:creator><![CDATA[Gil Bashe, Medika Life Editor]]></dc:creator>
		<pubDate>Wed, 14 Jan 2026 00:59:58 +0000</pubDate>
				<category><![CDATA[Autoimmune Conditions]]></category>
		<category><![CDATA[Cancers]]></category>
		<category><![CDATA[Cardiovascular]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Diseases]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[For Doctors]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[Genes]]></category>
		<category><![CDATA[Genetic]]></category>
		<category><![CDATA[Healthcare Policy and Opinion]]></category>
		<category><![CDATA[Rare and Orphan Diseases]]></category>
		<category><![CDATA[Rare Disease]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Biotech Showcase]]></category>
		<category><![CDATA[Briya Health]]></category>
		<category><![CDATA[Courative Inc]]></category>
		<category><![CDATA[Endure Biotherapeutics]]></category>
		<category><![CDATA[Finn Partners]]></category>
		<category><![CDATA[Frezent]]></category>
		<category><![CDATA[IowaiBIO Inc.]]></category>
		<category><![CDATA[JPM]]></category>
		<category><![CDATA[JPMorgan Healthcare Conference]]></category>
		<category><![CDATA[OrisDx]]></category>
		<category><![CDATA[Sideral Therapeutics]]></category>
		<category><![CDATA[SIvEC Biotechnologies]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21531</guid>

					<description><![CDATA[<p>Every January, San Francisco undergoes a transformation. For one week, the city shifts into high gear for the life sciences sector, becoming a dense, walkable ecosystem of ideas, innovation and deal-making. J.P. Morgan Healthcare Week is the catalyst. It draws the world’s largest pharmaceutical companies, institutional investors, policymakers and media into close proximity, turning hotels, [&#8230;]</p>
<p>The post <a href="https://medika.life/the-best-dating-game-in-health-innovation-happens-just-off-the-main-stage/">The Best Dating Game in Health Innovation Happens Just Off the Main Stage</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Every January, San Francisco undergoes a transformation. For one week, the city shifts into high gear for the life sciences sector, becoming a dense, walkable ecosystem of ideas, innovation and deal-making. <a href="https://www.jpmorgan.com/about-us/events-conferences/health-care-conference">J.P. Morgan Healthcare Week</a> is the catalyst. It draws the world’s largest pharmaceutical companies, institutional investors, policymakers and media into close proximity, turning hotels, boardrooms, cafés, and corridors into venues for decisions that will shape the future of medicine and patient care.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="696" height="613" src="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?resize=696%2C613&#038;ssl=1" alt="" class="wp-image-21534" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?resize=1024%2C902&amp;ssl=1 1024w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?resize=300%2C264&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?resize=768%2C676&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?resize=1536%2C1352&amp;ssl=1 1536w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?resize=150%2C132&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?resize=696%2C613&amp;ssl=1 696w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?resize=1068%2C940&amp;ssl=1 1068w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?w=1656&amp;ssl=1 1656w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/JPM.jpg?w=1392&amp;ssl=1 1392w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /><figcaption class="wp-element-caption">Photo Credit: Author &#8211; The Westin St. Francis may be the nucleus for the nation&#8217;s biggest gathering of health innovation, but the conversation is not confined to the St. Francis. The city becomes a &#8220;movable feast&#8221; for engagement.</figcaption></figure>



<p>The gravitational pull is unmistakable. The Westin St. Francis remains the symbolic center of power, where scale dominates the conversation and capital moves in large increments. Yet innovation, whether a new molecule or an engineering marvel, rarely begins at scale. It starts with a question, a patient-care frustration, a molecular insight and a small group of people willing to compress years of work into minutes of explanation.</p>



<p>That is why the <a href="https://informaconnect.com/biotech-showcase/">Biotech Showcase</a> matters. It’s why it continues to thrive just off the main stage. Like off-Broadway, this is where blockbusters are discovered.</p>



<h2 class="wp-block-heading"><strong>Seven Minutes to Be Understood</strong></h2>



<p>I spent part of the day sitting in one room at the Biotech Showcase, listening to a succession of rapid-fire presentations, seven minutes per company. The room was only half full, but it was intensely attentive. This was not casual listening. This was evaluative listening.</p>



<p>Companies including <a href="https://www.orisdx.com/">OrisDx</a>, <a href="https://www.iowabio.org/">IowaiBIO Inc</a>., <a href="https://endurebio.com/">Endure Biotherapeutics</a>, <a href="https://www.sivecbiotechnologies.com/">SIvEC Biotechnologies</a>, <a href="https://www.frezent.com/">Frezent</a>, <a href="https://siderealtx.com/">Sideral Therapeutics</a>, Courative Inc., and others each delivered a tightly constructed narrative of carefully curated slides: the unmet clinical need, the scientific or molecular approach, progress to date, the precise inflection point ahead and, most importantly, the resources needed for the next stage of development.</p>



<p>What made these presentations compelling was not polish; it was clarity. There was no time to hide behind jargon or aspiration. Seven minutes forces discipline. It reveals whether a team truly understands its own story. For investors or biopharma partners in the room, it quickly answers the most important question: <em>Is this something I want to continue discussing?</em></p>



<p>That is the essence of a productive dating game. Not every conversation leads to a match, but the right ones unmistakably spark an attraction.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="696" height="522" src="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase.jpg?resize=696%2C522&#038;ssl=1" alt="" class="wp-image-21533" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?resize=1536%2C1152&amp;ssl=1 1536w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?resize=2048%2C1536&amp;ssl=1 2048w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?resize=150%2C113&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?resize=696%2C522&amp;ssl=1 696w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?resize=1068%2C801&amp;ssl=1 1068w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?resize=1920%2C1440&amp;ssl=1 1920w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Biotech-Showcase-scaled.jpg?w=1392&amp;ssl=1 1392w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /><figcaption class="wp-element-caption">Photo Credit: Author &#8211; Biotech Showcase is a community of innovation &#8211; whether in the ballrooms, meeting halls, or lobby, conversation flows around what&#8217;s next.</figcaption></figure>



<h2 class="wp-block-heading"><strong>Why This Room Exists at All</strong></h2>



<p>The Biotech Showcase works because it understands timing and intent. Seed and early-stage companies do not come to San Francisco in January to compete with global pharmaceutical announcements. They come because the people who can change their trajectory are already in the city and already thinking about what comes next.</p>



<p>J.P. Morgan Healthcare Week is where the industry takes stock of itself. Large companies outline business plan priorities. Investors recalibrate portfolios. Strategies are stress-tested. In that context, the Biotech Showcase becomes a natural counterbalance: a place where emerging science is introduced not as speculation, but as possibility.</p>



<p>There is also quiet wisdom in the Showcase’s decision to record and share presentations after the event. In a week where schedules overlap and choices are constant, the ability to revisit a story matters. Conversations that begin in a room can continue weeks later, grounded in something concrete and lasting. That continuity is how relationships form—and how trust accumulates.</p>



<h2 class="wp-block-heading"><strong>The City Becomes the Platform</strong></h2>



<p>What is easy to overlook from the outside is how completely San Francisco itself becomes part of the infrastructure during this week. Beyond the formal stages, firms across the ecosystem host companies in nearby venues, creating dozens of smaller hubs within walking distance of one another.</p>



<p>At places like the Marines’ Memorial Club, companies are hosted quietly and efficiently, often fifteen or so at a time, by firms such as <a href="https://www.finnpartners.com/">FINN Partners</a>, alongside others working behind the scenes to support emerging science during the week. During the course of J.P. Morgan Week, these companies may hold more than 200 conversations with analysts, investors, and media representatives. No banners. No spectacle. Just focused, purposeful, personalized dialogue.</p>



<p>This distributed model works because it mirrors how decisions are actually made, not in a single dramatic moment, but through repeated, informed exchanges that foster knowledge and confidence.</p>



<p>When the day winds down, the city shifts again. Evenings during J.P. Morgan Week are reserved for receptions hosted by banks, global companies, industry groups, and even trade commissions from countries such as the UK, including the <a href="https://www.bioindustry.org/">UK Bioindustry Association</a>. These gatherings are not afterthoughts. They are where formality loosens, where introductions give way to relationships, and where ideas heard earlier in the day are tested in conversation. Science meets context. Strategy meets personality.</p>



<h2 class="wp-block-heading"><strong>When AI Enters the Dating Pool</strong></h2>



<p>One of the most notable developments this year is the growing presence of AI companies entering this ecosystem alongside emerging biotech companies. Firms such as <a href="https://briya.com/">Briya.Health</a> demonstrate how AI is no longer merely orbiting the life sciences; it is now deeply embedded within them.</p>



<p>Early-stage biotechs are data-rich and time-poor. They generate complex, unstructured information long before scale or certainty arrives. AI platforms that can surface insight, reduce friction, and accelerate decision-making change the nature of early collaboration.</p>



<p>When AI innovators and biotech founders encounter one another during this week—often in the same rooms, at the same receptions, and in the same corridors—the conversation accelerates. What might have taken months of coordination elsewhere can happen organically here. That is not a coincidence. It is designed.</p>



<h2 class="wp-block-heading"><strong>Why This Week Still Matters</strong></h2>



<p>Events like the Biotech Showcase, alongside complementary forums such as <a href="https://1businessworld.com/2026/01/global-bioinnovation-forum/global-bioinnovation-forum-shaping-the-future-of-health/">1BusinessWorld’s Global BioInnovation Forum</a>, emerge because they recognize how innovation actually drives progress. They understand that timing matters, place matters and proximity matters.</p>



<p>These gatherings do not compete with J.P. Morgan Healthcare Week; they complete it. Together, they create a comprehensive view of the health innovation lifecycle, from initial insight to global execution.</p>



<p>What I witnessed in that half-filled room was not hype. It was intent. Seven minutes at a time, company after company made a case—not just for funding, but for belief.</p>



<p>That is why the Biotech Showcase remains exactly what its name promises: a showcase of possibilities. And it is why, in the great dating game of health innovation, it remains one of the most honest and productive places to begin.</p>
<p>The post <a href="https://medika.life/the-best-dating-game-in-health-innovation-happens-just-off-the-main-stage/">The Best Dating Game in Health Innovation Happens Just Off the Main Stage</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21531</post-id>	</item>
		<item>
		<title>Why Biological Learning Demands the Friction We Seek to Delete?</title>
		<link>https://medika.life/why-biological-learning-demands-the-friction-we-seek-to-delete/</link>
		
		<dc:creator><![CDATA[Atefeh Ferdosipour]]></dc:creator>
		<pubDate>Wed, 07 Jan 2026 18:47:31 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[Ethics in Practice]]></category>
		<category><![CDATA[Policy and Practice]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Atefeh Ferdosipour]]></category>
		<category><![CDATA[Behaviorial Health]]></category>
		<category><![CDATA[Fiction-Based AI]]></category>
		<category><![CDATA[LLMs]]></category>
		<category><![CDATA[Skinner]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21516</guid>

					<description><![CDATA[<p>This short piece, as always, is born out of my passion for studying how theories can help us use Artificial Intelligence more effectively. I believe now more than ever that without interdisciplinary research, we won’t be able to logically face the challenges of the Cognitive Age. Systematically speaking, the key to identifying challenges lies in [&#8230;]</p>
<p>The post <a href="https://medika.life/why-biological-learning-demands-the-friction-we-seek-to-delete/">Why Biological Learning Demands the Friction We Seek to Delete?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>This short piece, as always, is born out of my passion for studying how theories can help us use <em>Artificial Intelligence</em> more effectively. I believe now more than ever that without interdisciplinary research, we won’t be able to logically face the challenges of the Cognitive Age.</p>



<p>Systematically speaking, the key to identifying challenges lies in examining fundamental issues, not just their consequences. For example, if we want to fix the flaws in the learning process, we must first redefine the roots of deep learning and its underlying mechanics. We may even need to redefine them repeatedly to understand how to solve the problems arising from mind-based technologies.</p>



<p>Let me explain what I mean through one of the most debated topics of our time: the mental laziness caused by the way <em>AI</em> is rewriting our brain&#8217;s habits. To understand this, we need to look at the dynamics of deep learning in the brain. By grasping this process through interdisciplinary research, we might find ways to make <em>AI</em> learning feel more like natural deep learning.</p>



<p>The goal isn&#8217;t just to know the biochemistry of cells. Before looking at what happens inside an organism, we should ask:</p>



<p>Why do we usually prefer learning through <em>AI</em> over the effortful, traditional human way?</p>



<p>You might say the answer is obvious: because learning with technology is effortless and fast.</p>



<p>As a learning specialist, I’d like to answer this from a theoretical perspective.</p>



<p>First, we must accept a reality: human deep learning is naturally a challenging process. It is fundamentally different from the effortless consumption of vast amounts of data that now characterizes formal and informal education assisted by <em>LLMs</em>.</p>



<h2 class="wp-block-heading">The Logic of Immediate Reward: From Skinner to the Present</h2>



<p>There is strong research showing that learners prefer a small, immediate reward over a larger, delayed one. This was first highlighted by B.F. <em>Skinner</em> (1953), the pioneer of operant conditioning. (I’ve previously written about how this connects to <em>AI</em>.)</p>



<p>Later, others expanded on this effortless-reward preference. In short, in the behavioral economics that grew out of Skinner’s theory, humans look for shortcuts.</p>



<p>AI is currently the ultimate shortcut, giving the best answer in seconds without any real struggle. From this view, it’s not just about the mind; it’s about behavioral economics.</p>



<p>A behavior that leads to a quick reward will always be repeated.</p>



<p><em>Richard</em> <em>Herrnstein</em> (1961), a student of Skinner&#8217;s, developed a mathematical formula called the Matching Law. He showed that organisms don&#8217;t just look at one reward; they choose between options. If given two choices, a living being will put its energy into the one that pays off faster and more directly. </p>
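


<p>To make that claim concrete, here is the matching law in modern textbook notation (not quoted from the 1961 paper itself): if B<sub>1</sub> and B<sub>2</sub> are an organism&#8217;s response rates on two options, and R<sub>1</sub> and R<sub>2</sub> are the reinforcement rates those options deliver, then B<sub>1</sub> / (B<sub>1</sub> + B<sub>2</sub>) = R<sub>1</sub> / (R<sub>1</sub> + R<sub>2</sub>). Effort is allocated in proportion to payoff, which is why the faster, more direct option captures most of the behavior.</p>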



<p>In <em>behavioral economics</em>, this phenomenon is known as <em>temporal discounting</em> (<em>Ainslie</em>, 1975). The value of a reward drops the longer you have to wait for it. Simply put, the reward loses its shine in the organism&#8217;s mind because it requires patience.</p>
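


<p>A standard formalization of this effect (Mazur&#8217;s hyperbolic model, developed in the tradition Ainslie began, offered here only as an illustration) writes the subjective value V of a reward of amount A delayed by time D as V = A / (1 + kD), where k measures how steeply an individual discounts delay. As D grows, V collapses toward zero: the larger but slower reward of genuine mastery can be outvalued by the small but instant reward of a generated answer.</p>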



<p>We observe this phenomenon every day with <em>AI</em> users, particularly those utilizing <em>ChatGPT</em>. Students, for instance, might feel that spending hours writing a thesis is stupid or inefficient when they can get an answer in a split second. They don&#8217;t just feel productive; they feel smart for bypassing the effort.</p>



<p>Even if you tell them that the struggle is what actually builds their brain, they often won&#8217;t listen. They choose the immediate payout over the long-term value. </p>



<p><em>Evolutionary</em> <em>psychology</em> explains this too: an immediate reward is guaranteed, while a future one is uncertain. Since we are wired for survival, we grab what’s available now.</p>



<h2 class="wp-block-heading">Brain Biochemistry and the Deep Learning Process</h2>



<p>When we learn something deeply, three key things happen at a neurological level:</p>



<ol>
<li>Exposure to New Information: The nervous system makes its first contact with data for which it has no existing pattern.</li>
<li>Cognitive Load: This is that stuck feeling when a mental process is harder than expected. It’s the effort the brain needs to process unfamiliar data (Sweller, 1988). This friction is essential.</li>
<li>Processing and Protein Synthesis: If the information is processed correctly, chemical signals trigger the creation of proteins that physically change the brain&#8217;s structure to store that knowledge (Kandel, 2001).</li>
</ol>



<p>This is why sleep is so vital. Most of this protein synthesis happens while we rest.&nbsp;</p>



<p>One of the most beautiful parts of learning is when we stop thinking about a problem, but our brain keeps working on it.&nbsp;</p>



<p>Through the Default Mode Network or DMN (Raichle, 2015), the brain makes random, creative connections. This is where true creativity is born.</p>



<h2 class="wp-block-heading">Toward Friction-Based AI</h2>



<p>If deep learning is the result of protein synthesis triggered by challenge, then the paradox of modern AI is clear: By removing the friction, technology is removing the learning.&nbsp;</p>



<p>We are facing a biological crisis where human brains, instead of producing genius and problem-solving skills, are becoming mere terminals for receiving quick hits of dopamine.</p>



<p>My proposal is simple: How can we turn AI from a passive answer-giver into a Cognitive Challenging Provocateur? </p>



<p>We need to design models that don&#8217;t bypass cognitive load but manage it in a personalized way.&nbsp;</p>



<p>I call this Friction-based AI: a model where algorithms are programmed not for the shortest path, but for the most effective learning path. This is an open invitation to researchers, neuroscientists, and AI architects to collaborate on this new paradigm. My ideas are ready to be turned into actionable proposals.</p>



<p>As a final note, I believe the way we interact with AI is a skill in itself. Even if everyone has the same tools, the results aren&#8217;t equal. Efficiency depends on the how.&nbsp;</p>



<p>I am currently developing a startup idea to address these exact challenges in EdTech. It’s EdTechxDr. Atefeh F.</p>



<h2 class="wp-block-heading">References</h2>



<p>• Ainslie, G. (1975). Specious reward: A behavioral theory of impulsiveness and impulse control. Psychological Bulletin.</p>



<p>• Herrnstein, R. J. (1961). Relative and absolute strength of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior.</p>



<p>• Kandel, E. R. (2001). The Molecular Biology of Memory Storage: A Dialogue Between Genes and Synapses. Science.</p>



<p>• Raichle, M. E. (2015). The Brain&#8217;s Default Mode Network. Annual Review of Neuroscience.</p>



<p>• Skinner, B. F. (1953). Science and Human Behavior. Simon and Schuster.</p>



<p>• Sweller, J. (1988). Cognitive Load During Problem Solving: Effects on Learning. Cognitive Science.</p>
<p>The post <a href="https://medika.life/why-biological-learning-demands-the-friction-we-seek-to-delete/">Why Biological Learning Demands the Friction We Seek to Delete?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21516</post-id>	</item>
		<item>
		<title>AI in 2026 – Boom, Bust or Backlash in Healthcare?</title>
		<link>https://medika.life/ai-in-2026-boom-bust-or-backlash-in-healthcare/</link>
		
		<dc:creator><![CDATA[Tom Lawry]]></dc:creator>
		<pubDate>Wed, 07 Jan 2026 18:29:01 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[Ethics in Practice]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[Policy and Practice]]></category>
		<category><![CDATA[Public Health]]></category>
		<category><![CDATA[TeleHealth]]></category>
		<category><![CDATA[Trending Issues]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[GenAI]]></category>
		<category><![CDATA[Generative AL]]></category>
		<category><![CDATA[Hacking Health Care]]></category>
		<category><![CDATA[Health Care Nation]]></category>
		<category><![CDATA[HIMSS]]></category>
		<category><![CDATA[Tom Lawry]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21510</guid>

					<description><![CDATA[<p>It was the fall of 2022 when large language models and Generative AI burst out of research labs and onto Main Street. Since then, every day seems to bring another AI breakthrough that challenges how work gets done. In my role advising organizations on AI strategy and deployments, I see a consistent pattern among healthcare [&#8230;]</p>
<p>The post <a href="https://medika.life/ai-in-2026-boom-bust-or-backlash-in-healthcare/">AI in 2026 – Boom, Bust or Backlash in Healthcare?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="478" height="79" src="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Tom-Lawry-Pic-2.png?resize=478%2C79&#038;ssl=1" alt="" class="wp-image-21513" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Tom-Lawry-Pic-2.png?w=478&amp;ssl=1 478w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Tom-Lawry-Pic-2.png?resize=300%2C50&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/Tom-Lawry-Pic-2.png?resize=150%2C25&amp;ssl=1 150w" sizes="(max-width: 478px) 100vw, 478px" data-recalc-dims="1" /></figure>



<p>It was the fall of 2022 when large language models and Generative AI burst out of research labs and onto Main Street. Since then, every day seems to bring another AI breakthrough that challenges how work gets done.</p>



<p>In my role advising organizations on AI strategy and deployments, I see a consistent pattern among healthcare leaders: excitement about what AI could unlock, paired with exhaustion from the volume of noise, pressure, and competing claims.</p>



<h2 class="wp-block-heading"><strong><em>Welcome to 2026.</em></strong></h2>



<p>As predictions flood inboxes and social feeds, focused on what AI <em>might</em> do next, I want to ground the conversation in something more useful. Rather than forecasting outcomes, let’s focus on three forces already at work—forces that will determine whether AI delivers real value in healthcare or quietly stalls.</p>



<p>Will 2026 be a year of boom, bust, or backlash?</p>



<p>The honest answer is yes.</p>



<h2 class="wp-block-heading"><strong>Boom: Early Wins—and an AI Arms Race</strong></h2>



<p>Let’s start with what’s working.</p>



<p>Healthcare is seeing real, if narrow, gains from AI:</p>



<ul>
<li>Ambient documentation reduces administrative burden</li>



<li>Imaging and pathology tools are improving speed and consistency</li>



<li>Operational and revenue cycle applications are driving incremental efficiency</li>
</ul>



<p>These are not moonshots. They are targeted solutions addressing specific pain points. And they matter.</p>



<p>At the same time, healthcare is now firmly in an AI arms race.</p>



<p>Every EHR vendor, medical device company, life sciences firm, and digital health startup is racing to declare itself “AI-native.” Roadmaps are packed with copilots, assistants, agents, and automation claims. No vendor wants to be perceived as falling behind.</p>



<p>That pressure is accelerating innovation—but it’s also compressing timelines, encouraging over-promising, and pushing organizations to adopt faster than they can realistically absorb.</p>



<p>Boom energy is real.</p>



<p>But it is also uneven and fragile.</p>



<p><strong>Prediction:</strong> Within two years, most AI used in provider organizations will arrive embedded inside core systems and devices already in use. Intelligence will not be something teams “add on”; it will be something they inherit.</p>



<p><strong>Recommendation: </strong>Understand where AI is already embedded across your vendor ecosystem and what’s coming next. Engage early through advisory councils or pilots. Engage and prepare clinicians before introducing these capabilities into workflows. AI should never arrive as a surprise.</p>



<h2 class="wp-block-heading"><strong>Bust: When Pilots Multiply, but Value Doesn’t</strong></h2>



<p>Generative AI has dominated innovation agendas, yet only a fraction of pilots ever reach sustained production. A survey cited by MIT reports that roughly <strong>95% of business AI pilots fail to generate measurable returns.</strong></p>



<p>This is not evidence that AI lacks value.</p>



<p>It is evidence that many organizations lack discipline.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="696" height="420" src="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image.jpeg?resize=696%2C420&#038;ssl=1" alt="" class="wp-image-21511" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image.jpeg?resize=1024%2C618&amp;ssl=1 1024w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image.jpeg?resize=300%2C181&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image.jpeg?resize=768%2C464&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image.jpeg?resize=150%2C91&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image.jpeg?resize=696%2C420&amp;ssl=1 696w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image.jpeg?resize=1068%2C645&amp;ssl=1 1068w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image.jpeg?w=1274&amp;ssl=1 1274w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /></figure>



<p>High failure rates are normal in early markets. Technology matures. Tools improve. But value only materializes when leaders focus on fundamentals: design, data readiness, workflow integration, and ownership.</p>



<p>Most AI initiatives fail not because the technology doesn’t work, but because success is never clearly defined. Projects are launched out of curiosity, vendor pressure, or fear of being left behind. Clinical impact, operational accountability, and economic value are clarified too late—if at all.</p>



<p>Equally damaging is the underestimation of the human systems AI enters. Healthcare work is relational, regulated, and trust-dependent. When AI is introduced without redesigning workflows, preparing staff, or clarifying responsibility, it creates friction—not relief. Adoption then stalls quietly.</p>



<p><strong>Prediction:</strong> In 2026, organizations will run fewer AI pilots—but with much higher expectations. Boards and executives will require clearer evidence of clinical, workforce, or financial value before approving new initiatives.</p>



<p><strong>Recommendation:</strong> Move from “fail fast” to “fail before you scale.” Define success upfront, assign ownership early, and redesign workflows in tandem with technology. AI initiatives without a credible path to value should be halted immediately.</p>



<h2 class="wp-block-heading"><strong>Backlash: Fear, Workforce Anxiety, and the Trust Gap</strong></h2>



<p>The most underestimated force shaping AI’s trajectory in 2026 is neither technical nor financial.</p>



<p>It’s human.</p>



<p>History offers context. When automobiles first appeared, they were seen as dangerous and socially disruptive. Red Flag laws required people to walk ahead of vehicles waving flags and capped speeds at just a few miles per hour. These laws weren’t about innovation—they were about fear, control, and adjustment.</p>



<p>Healthcare AI is entering a similar phase.</p>



<p>Workforce research shows healthcare workers are among the most cautious about AI adoption, citing concerns about trust, transparency, and job impact. This caution is not irrational. Healthcare has a long history of technology being imposed rather than co-designed.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="696" height="317" src="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image-1.jpeg?resize=696%2C317&#038;ssl=1" alt="" class="wp-image-21512" srcset="https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image-1.jpeg?resize=1024%2C467&amp;ssl=1 1024w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image-1.jpeg?resize=300%2C137&amp;ssl=1 300w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image-1.jpeg?resize=768%2C350&amp;ssl=1 768w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image-1.jpeg?resize=150%2C68&amp;ssl=1 150w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image-1.jpeg?resize=696%2C317&amp;ssl=1 696w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image-1.jpeg?resize=1068%2C487&amp;ssl=1 1068w, https://i0.wp.com/medika.life/wp-content/uploads/2026/01/image-1.jpeg?w=1174&amp;ssl=1 1174w" sizes="(max-width: 696px) 100vw, 696px" data-recalc-dims="1" /></figure>



<p>As a result, scrutiny is increasing—particularly from labor organizations and state legislators. Recent bills, including those limiting AI’s role in clinical decision-making and licensed practice, reflect not anti-innovation sentiment, but unresolved trust and knowledge gaps.</p>



<p>Innovation does not scale without trust.</p>



<p>In 2026, AI scrutiny will intensify, especially with labor organizations and at the state legislative level.</p>



<p>As I write this, the Chair of the New York State Senate Committee on Internet and Technology just introduced a bill (S7263) to “protect patients and front-line care workers from the adverse effects of AI tools in risky or untested settings.” The bill prohibits chatbots from performing the duties of licensed nurses and puts strong guardrails around the use of AI in healthcare settings.</p>



<p>I often write about the need for a balanced approach to defining both the “gas and guardrails” that guide AI’s use in health and medicine. Incentives and safeguards are equally important.</p>



<p><strong>Prediction</strong>: Expect increased legislative activity and labor engagement around AI in healthcare throughout 2026. Such actions should not be dismissed simply as anti-innovation. They reflect something deeper: a trust and knowledge gap that needs to be closed.</p>



<p><strong>Recommendation: </strong>Create durable AI value by investing in workforce and consumer education. Clinicians need clarity—not just on how AI works, but on how it supports professional judgment rather than replaces it.</p>



<h2 class="wp-block-heading"><strong>From Awe to Analysis</strong></h2>



<p>The year ahead will test the resolve of leadership. Transformation in healthcare is rarely linear—and never clean.</p>



<p>Vendors will continue to showcase breakthroughs. The hype will continue. But 2026 is not the year for cheerleading.</p>



<p>It is the year for realism.</p>



<p>The most effective leaders are moving from awe to analysis—recognizing that AI value does not come from the technology itself, but from the opportunity it creates to rethink how work gets done.</p>



<p>In that sense, AI value is—and always will be—a uniquely human process.</p>
<p>The post <a href="https://medika.life/ai-in-2026-boom-bust-or-backlash-in-healthcare/">AI in 2026 – Boom, Bust or Backlash in Healthcare?</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21510</post-id>	</item>
		<item>
		<title>What 2025 Taught Us and What 2026 Will Demand</title>
		<link>https://medika.life/what-2025-taught-us-and-what-2026-will-demand/</link>
		
		<dc:creator><![CDATA[Atefeh Ferdosipour]]></dc:creator>
		<pubDate>Wed, 24 Dec 2025 00:30:15 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Atefeh Ferdosipour]]></category>
		<category><![CDATA[Digital]]></category>
		<category><![CDATA[GenAI]]></category>
		<category><![CDATA[Human]]></category>
		<category><![CDATA[LLMs]]></category>
		<category><![CDATA[Mindful]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21497</guid>

					<description><![CDATA[<p>It is impossible to talk about and predict the future without considering past events. Therefore, in this brief article, as I did last year, I will attempt to compare the events of 2025 with those of 2026. The primary goal is not a quick glance, but a brief analysis to identify potential gaps. Because we [&#8230;]</p>
<p>The post <a href="https://medika.life/what-2025-taught-us-and-what-2026-will-demand/">What 2025 Taught Us and What 2026 Will Demand</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>It is impossible to talk about and predict the future without considering past events. Therefore, in this brief article, as I did last year, I will attempt to compare the events of 2025 with the likely demands of 2026. The primary goal is not a quick glance, but a brief analysis to identify potential gaps; after all, without understanding the problem, it is impossible to find possible solutions.</p>



<p>As the title of the article suggests, this comparison and analysis focuses on developments in the digital world and the major changes that artificial intelligence brought about in the past year. The second part of the article examines the effects these technologies may have on human life and the world around us in the coming year. Finally, I will describe the gap that emerged in my thinking and the solution I reached after months of study.</p>



<h2 class="wp-block-heading"><strong>The evolution of the digital world in 2025</strong></h2>



<p>In 2025, artificial intelligence transitioned from an emerging technology to the primary infrastructure of the digital economy. Massive investments, powerful multimodal models, and the rapid penetration of AI into healthcare, education, and everyday life made 2025 a turning point in the history of technology. Below is a brief overview of the most important developments.</p>



<ol start="1">
<li>In 2025, Google’s education offering, Gemini for Education, officially reached more than 10 million students across over 1,000 institutions in the United States.</li>



<li>Google introduced more than 150 new features, including quizzes, flashcards, and other learning tools for teachers and students. As a result, artificial intelligence—at least in some countries—is no longer merely a research project but has become part of everyday academic life.</li>



<li>Google and the United Arab Emirates have launched a public education initiative called AI for All, aimed at empowering students, teachers, and small businesses with AI literacy and skills.</li>



<li>Greece signed a memorandum of understanding with OpenAI to introduce an educational version of ChatGPT, ChatGPT Edu, into schools, signaling that not only companies but also governments are integrating AI into national education systems.</li>



<li>The 2025 EdTech Industry Report indicates that online learning platforms, VR/AR technologies, personalized learning, data-driven education, and AI-powered tools have become part of the mainstream education ecosystem. The convergence of technology, learning, and AI is no longer a temporary trend but a defining direction of the education industry.</li>



<li>From a regulatory perspective, the European Union, the United States, China, and other countries passed new legislation addressing transparency, risk management, model accountability, and data security.</li>
</ol>



<h2 class="wp-block-heading"><strong>AI-driven transformations in education</strong></h2>



<p>When focusing specifically on education, these developments can be summarized as follows:</p>



<ol start="1">
<li>Full integration of AI into teaching and classrooms, including content generation, assessment design, homework evaluation, slide creation, and automated coaching in many schools and universities.</li>



<li>Personalized learning, with individual learning paths determined based on learners’ performance and behavioral data.</li>



<li>Expansion of VR/AR and immersive learning environments, such as virtual laboratories, realistic educational visits, and scientific or historical simulations.</li>



<li>A changing role for educators, shifting from learning designers and content providers to facilitators, mentors, and guides of the learning process.</li>



<li>Teaching digital literacy skills, including critical thinking, awareness of algorithmic bias, and effective human–machine collaboration.</li>



<li>Greater inclusion and equity, through AI-supported tools for learners with special needs and improved access for underserved regions.</li>



<li>Growth of skills-based education, with short-term online programs expanding alongside traditional universities and increased emphasis on labor-market-relevant skills.</li>
</ol>



<h2 class="wp-block-heading"><strong>Country competition and regional trends</strong></h2>



<p>Understanding the pace of AI-driven technological change from a geographical perspective provides insight into both current developments and emerging global competition. In 2025, regional trends were shaped as follows:</p>



<ol start="1">
<li>In Europe, regulations became more stringent, and practical guidelines were introduced to ensure transparency and safety in AI systems. Countries such as Finland, Estonia, and France took leading roles in standardizing teacher training and the safe integration of AI in education.</li>



<li>In Asia, South Korea, China, India, and Singapore experienced significant growth, particularly in applying AI within schools and national education programs. South Korea, Japan, and Singapore emerged as pioneers in personalized learning and smart classroom technologies.</li>



<li>The United States remained a leader in edtech innovation, infrastructure development, and university-led workforce training in AI. The U.S., China, and India also accounted for the largest investments and the highest number of leading edtech companies.</li>



<li>In the Middle East, the UAE and Saudi Arabia made substantial investments in smart schools and national AI-driven education initiatives.</li>



<li>Several African countries and other developing regions focused on leveraging AI to expand affordable and equitable access to education.</li>
</ol>



<h2 class="wp-block-heading"><strong>Possible developments in 2026</strong></h2>



<p>Past developments often make future trends partially predictable. This predictability enables more effective planning and strategic decision-making, as well as earlier identification of potential risks. Based on this perspective, several key developments may shape 2026.</p>



<ol start="1">
<li>Unlike the highly enthusiastic and innovation-driven years of recent AI expansion, 2026 is likely to place a stronger emphasis on human responsibility. While 2025 was largely defined by competition in production, innovation, and the widespread application of AI, emerging gaps and challenges may prompt experts—particularly in technology and education—to adopt more human-centered approaches, ethical standards, and intelligent, restrained use of AI. The focus may shift from mere adoption and digitalization toward deeper engagement with the human mind and new perspectives on meaningful learning.</li>



<li>In a previous article published in this same media outlet, I argued that artificial intelligence would increasingly take on a mentoring role. This trend became visible in 2025 and is expected to intensify in 2026. I believe that AI systems can function as self-regulating psychological support for the human mind and encourage deeper thinking. However, this process requires clear prerequisites. When grounded appropriately in psychological principles, particularly within learning environments, two-way cognitive engagement between humans and AI can be significantly strengthened. This highlights the necessity of applying cognitive and behavioral psychology in the design of learning environments and intelligent systems. This line of thinking has also informed the development of my current research-oriented startup project, details of which I have discussed in another article published in the same media.</li>



<li>Another major issue is deep personalization of learning. While personalization was already considered important in AI-supported learning in 2025, it will become mandatory in 2026. Advanced educational systems based on large language models must increasingly account for learners’ cognitive load, motivation, emotional states, and cultural backgrounds. Uniform education models will be ineffective in the age of AI. This challenge has been a core motivation behind the design of my current project.</li>
</ol>



<h2 class="wp-block-heading"><strong>Challenges and requirements in the age of artificial intelligence</strong></h2>



<p>Considering the developments discussed above, several major challenges are likely to persist or intensify.</p>



<ol start="1">
<li>The risk of weakening independent thinking remains a serious concern. Overreliance on AI technologies and excessive consumption of AI-generated outputs may reduce the perceived importance of higher-order cognitive skills such as critical thinking, creativity, and problem-solving. This issue requires systematic research to determine which cognitive abilities may be weakened, under what conditions, and among which groups of consumers or learners. Conversely, if interaction with large language models is to enhance cognitive capacities, the underlying mechanisms must be clearly understood.</li>



<li>New forms of educational inequality may emerge. Beyond simple access to technology, a deeper divide may develop between those who learn how to think with AI and those who merely receive outputs from it. Educational equity should therefore focus not only on access statistics but also on teaching learners how to engage cognitively and responsibly with AI systems. Reflection on this challenge has played a significant role in shaping my research trajectory and startup initiative.</li>



<li>The crisis of educational assessment and learning validity is becoming increasingly evident. Although formative and summative assessment debates predate recent developments in AI, the rise of large language models intensifies existing challenges. As definitions of knowledge, learning, and competence become less clear-cut, education systems must reconsider traditional evaluation practices. Emphasizing process-oriented assessment rather than final products may offer a more appropriate response in the coming years.</li>



<li>Finally, the redefinition of literacy and skill represents another major challenge. As future selection processes increasingly rely on learning histories and competencies, classical definitions of literacy and expertise may no longer suffice. Education and learning specialists will bear responsibility for revisiting fundamental concepts such as knowledge, literacy, and skill—a task that cannot be accomplished without systematic research.</li>
</ol>



<h2 class="wp-block-heading"><strong>Summary</strong></h2>



<p>In this article, I sought to present a concise analytical comparison of developments in the digital world, particularly in education, between 2025 and the emerging demands of 2026. Drawing on personal experience, academic and research activities, and a review of reputable international sources (some of which are cited in the Resources section), the article moves beyond descriptive reporting to identify key gaps, challenges, and possible future directions in the age of artificial intelligence. As a psychologist and educational researcher, my primary focus has been on AI’s role in education, the changing nature of learning, the evolving role of educators, and the cognitive, ethical, and educational implications of these technologies.</p>



<p>Furthermore, my studies and observations over the past three to four years—especially regarding challenges such as the weakening of independent thinking, emerging educational inequalities, the crisis of learning assessment, and the necessity of human-centered design—have led to the development of a new research-applied initiative. This initiative is currently being developed as a research-oriented startup titled ETech-DrAtefehF, which aims to integrate theories from educational psychology and learning sciences into the design and application of AI in education, with the goal of fostering deep learning, self-regulation, and meaningful human–technology interaction.</p>



<h2 class="wp-block-heading"><strong>Resources</strong></h2>



<p>Ed-Ex – Global EdTech Trends 2025: How AI Is Reshaping Learning</p>



<p><a href="https://ed-ex.com/en/blog/global-edtech-trends-2025-how-ai-is-reshaping-learning">https://ed-ex.com/en/blog/global-edtech-trends-2025-how-ai-is-reshaping-learning</a></p>



<p>Codiste – AI Trends Transforming EdTech (2025)</p>



<p><a href="https://www.codiste.com/ai-trends-transform-edtech">https://www.codiste.com/ai-trends-transform-edtech</a></p>



<p>EdTech Innovation Hub – Ten EdTech Predictions for 2025</p>



<p><a href="https://www.edtechinnovationhub.com/news/starrng-ai-vr-microlearning-and-more-etihs-ten-predictions-for-edtech-in-2025">https://www.edtechinnovationhub.com/news/starrng-ai-vr-microlearning-and-more-etihs-ten-predictions-for-edtech-in-2025</a></p>



<p>Vocaliv – 10 EdTech Trends to Watch in 2025</p>



<p><a href="https://blog.vocaliv.com/10-edtech-trends-to-watch-in-2025/">https://blog.vocaliv.com/10-edtech-trends-to-watch-in-2025/</a></p>



<p>arXiv – Integrating Generative AI into Learning Management Systems (2025)</p>



<p><a href="https://arxiv.org/abs/2510.18026">https://arxiv.org/abs/2510.18026</a></p>



<p>arXiv – Generative AI in Education: Student Skills &amp; Lecturer Roles (2025)</p>



<p><a href="https://arxiv.org/abs/2504.19673">https://arxiv.org/abs/2504.19673</a></p>



<p>arXiv – Ethical Challenges of AI in STEM &amp; K–12 Education (2025)</p>



<p><a href="https://arxiv.org/abs/2510.19196">https://arxiv.org/abs/2510.19196</a></p>



<p>arXiv – Accessible AI-Based Learning Tools for Special Needs (2025)</p>



<p><a href="https://arxiv.org/abs/2504.17117">https://arxiv.org/abs/2504.17117</a></p>



<p>TIME Magazine – World’s Top EdTech Companies of 2025</p>



<p><a href="https://qa.time.com/7335559/worlds-top-edtech-companies-of-2025">https://qa.time.com/7335559/worlds-top-edtech-companies-of-2025</a></p>



<p>LinkedIn News – Global vs. MENA EdTech Funding 2025</p>



<p>EU AI Act documentation &amp; implementation guidelines (2025)</p>



<p><a href="https://artificialintelligenceact.eu/">https://artificialintelligenceact.eu/</a></p>
<p>The post <a href="https://medika.life/what-2025-taught-us-and-what-2026-will-demand/">What 2025 Taught Us and What 2026 Will Demand</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21497</post-id>	</item>
		<item>
		<title>ETech-DrAtefehF</title>
		<link>https://medika.life/etech-dratefehf/</link>
		
		<dc:creator><![CDATA[Atefeh Ferdosipour]]></dc:creator>
		<pubDate>Thu, 04 Dec 2025 18:08:25 +0000</pubDate>
				<category><![CDATA[AI Chat GPT GenAI]]></category>
		<category><![CDATA[Digital Health]]></category>
		<category><![CDATA[Editors Choice]]></category>
		<category><![CDATA[General Health]]></category>
		<category><![CDATA[Mental Health]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Atefeh Ferdosipour]]></category>
		<category><![CDATA[digital transformation]]></category>
		<category><![CDATA[education]]></category>
		<category><![CDATA[ETech-DrAtefehF]]></category>
		<category><![CDATA[Learning Theory]]></category>
		<category><![CDATA[LLMs]]></category>
		<guid isPermaLink="false">https://medika.life/?p=21481</guid>

					<description><![CDATA[<p>For more than three years, I have been working on a simple but powerful question: how can we design educational technology that draws inspiration from human cognitive abilities and psychological processes, instead of forcing learners to adapt to technology that does not understand them? At the same time, I have been asking how psychological and [&#8230;]</p>
<p>The post <a href="https://medika.life/etech-dratefehf/">ETech-DrAtefehF</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>For more than three years, I have been working on a simple but powerful question: how can we design educational technology that draws inspiration from human cognitive abilities and psychological processes, instead of forcing learners to adapt to technology that does not understand them? At the same time, I have been asking how psychological and educational theories can help us modernize artificial intelligence so that it can connect more meaningfully with today’s learners, who grow up surrounded by advanced technologies and constant interaction with digital systems. These questions gradually evolved into the foundation of a new idea that has shaped my current start-up initiative, ETech-DrAtefehF.</p>



<p>My earlier research in educational psychology, particularly in text comprehension, cognitive processes, and instructional design, consistently showed that learning improves when information is structured in ways that align with the human mind. Features such as cohesion, rhetorical patterns, and paragraph organization are not stylistic choices; they directly influence understanding, memory, and motivation. When educational technology ignores these principles, learning becomes shallow and exhausting. When technology respects them, learning becomes clearer and more meaningful.</p>



<p>Artificial intelligence has advanced dramatically, yet many learning systems today still focus on automation rather than understanding. They deliver content, grade assignments, or predict performance, but they rarely engage with the emotional and cognitive realities of the learner. Learning is not a mechanical transfer of information. It is a psychological journey shaped by curiosity, confusion, emotion, prior knowledge, and the need for meaning.</p>



<p>This gap between technological capability and human learning is exactly where ETech-DrAtefehF is positioned.</p>



<h2 class="wp-block-heading"><strong>A New Approach to Learning Technology</strong></h2>



<p>Rather than building yet another educational app, ETech-DrAtefehF aims to create a new category of intelligent learning systems that are grounded in psychology. These systems aim to respond to the learner in real time, adapting not only to what the learner knows, but also to how the learner feels, how they process information, and how their understanding evolves moment by moment.</p>



<p>The vision includes systems that can sense when a learner is overwhelmed and adjust the pace, restructure complex ideas into simpler forms, or provide alternative examples that restore clarity. They can identify curiosity and deepen a topic intelligently. They can reorganize reading materials based on evidence-based principles so that comprehension improves without adding cognitive load. These ideas are rooted in decades of research on cognition and learning, yet AI now allows them to be implemented dynamically.</p>



<p>The theoretical foundations include the contributions of Piaget, Vygotsky, Bloom, and many other psychologists who emphasized how understanding develops, how knowledge is constructed, and how learners benefit from supportive guidance. These theories can now be integrated into adaptive learning frameworks in ways that were not technologically possible before.</p>



<h2 class="wp-block-heading"><strong>Why This Matters Today</strong></h2>



<p>Education is entering a period of global transformation. Learners in every setting, from schools to universities to professional environments, need systems that support meaningful learning rather than fast consumption of information. Artificial intelligence can play a central role in this transformation, but only if it is built on a deep understanding of human psychology.</p>



<p>ETech-DrAtefehF aims to bring together the strongest elements of learning theory, cognitive science, and human-centered AI design to create educational solutions that are both scientifically grounded and practical. These systems are designed to honor the learner’s cognitive architecture, reduce unnecessary complexity, and promote genuine understanding.</p>



<p>Across diverse learning environments, the need for such approaches is growing rapidly.</p>



<p>Educators are seeking tools that are ethical, transparent, and effective. Learners are asking for technology that supports their growth, not just their performance metrics. Institutions want systems that are scalable and adaptable to global contexts.</p>



<h2 class="wp-block-heading"><strong>An Open Invitation</strong></h2>



<p>As this initiative expands internationally, I am now entering a stage focused on building a wider community of collaboration around ETech-DrAtefehF. I welcome conversations with researchers, educators, psychologists, and AI specialists who share a belief in responsible, human-centered innovation. I am also opening discussions with global investors who recognize the long-term value of educational technology that is grounded in scientific insight rather than short-term trends.</p>



<p>My goal is to bring together partners who see the same opportunity: to create learning systems that are meaningful, ethical, and capable of supporting real human growth. If this vision resonates with you, I would be glad to exchange ideas and explore future collaboration.</p>



<p>The next generation of educational technology should not simply deliver information. It should understand learners.</p>



<p>That is the mission at ETech-DrAtefehF.</p>
<p>The post <a href="https://medika.life/etech-dratefehf/">ETech-DrAtefehF</a> appeared first on <a href="https://medika.life">Medika Life</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">21481</post-id>	</item>
	</channel>
</rss>
