<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Dan Sadler]]></title><description><![CDATA[Dan Sadler]]></description><link>https://www.dansadler.blog/</link><image><url>https://www.dansadler.blog/favicon.png</url><title>Dan Sadler</title><link>https://www.dansadler.blog/</link></image><generator>Ghost 5.30</generator><lastBuildDate>Fri, 10 Apr 2026 17:16:39 GMT</lastBuildDate><atom:link href="https://www.dansadler.blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Do we need to reclaim the hypothesis?]]></title><description><![CDATA[<p>Scientific knowledge grows through cycles of conjecture and criticism. But in his recent piece, <a href="https://network.febs.org/posts/thinking-like-a-scientist-part-three-hypothesis-retrofitting">Hypothesis Retrofitting</a>, Dr. Frezza warns us that big data and AI are disrupting this cycle, pulling science firmly toward data collection and away from hypothesis generation. The end result is scientific papers brimming with data yet</p>]]></description><link>https://www.dansadler.blog/do-we-need-to-reclaim-the-hypothesis/</link><guid isPermaLink="false">69b4b6b780027f1b6bac408c</guid><dc:creator><![CDATA[Daniel G. Sadler]]></dc:creator><pubDate>Sat, 14 Mar 2026 01:38:00 GMT</pubDate><media:content url="https://www.dansadler.blog/content/images/2026/03/8f0835bc3252500fb3ac4640d28edd5a5c6dff9f0a490b558a8e7e2161395460.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.dansadler.blog/content/images/2026/03/8f0835bc3252500fb3ac4640d28edd5a5c6dff9f0a490b558a8e7e2161395460.png" alt="Do we need to reclaim the hypothesis?"><p>Scientific knowledge grows through cycles of conjecture and criticism. 
But in his recent piece, <a href="https://network.febs.org/posts/thinking-like-a-scientist-part-three-hypothesis-retrofitting">Hypothesis Retrofitting</a>, Dr. Frezza warns us that big data and AI are disrupting this cycle, pulling science firmly toward data collection and away from hypothesis generation. The end result is scientific papers brimming with data yet devoid of explanation.</p><p>Frezza is right to suggest that we risk conflating data collection with discovery. But as I read his essay, I found myself questioning some of the points he raised: that omics encourages HARKing, that we lean too heavily on omics for hypothesis generation, and that AI will accelerate the decline of scientific creativity. These are important concerns, but they are more nuanced than he describes.</p><p>Here, I&apos;ll examine these claims in turn. I hope that by identifying the real issues at hand, we will be better equipped to resolve them.</p><h2 id="exploratory-omics-needs-post-hoc-hypotheses">Exploratory omics needs post-hoc hypotheses</h2><p>Frezza warns that omics is turning us into hypothesis retrofitters at scale.</p><p>In confirmatory science, that&apos;s a problem. It&apos;s called Hypothesizing After Results are Known, or HARKing. A helpful analogy is the Texas Sharpshooter: a shooter fires bullets into a barn, then conveniently draws a target around the holes, claiming his brilliance. Post-hoc claims of this kind are much less impressive than genuinely pre-specified predictions (i.e., &quot;I&apos;m going to hit the bullseye&quot;). They&apos;re less impressive because the experiment <em>wasn&apos;t designed</em> to test them, inviting false positives and corroding the credibility of the conclusions.</p><p>But most omics isn&#x2019;t confirmatory science. It&#x2019;s exploratory.</p><p>Omics experiments generate vast datasets, revealing metabolites, transcripts, proteins and entire pathways that change with an intervention. And we <em>really</em> care about the surprises. 
These are the observations that conflict with, or aren&#x2019;t accounted for by, our best available knowledge&#x2014;they expose a gap in our understanding. And the only way to close that gap is to construct a new hypothesis that accounts for what we didn&apos;t expect to find.</p><p>Without a new explanation, all you can say is that a particular set of molecules went up or down. But the interesting question is <em>why</em>&#x2014;which is exactly where the knowledge deficit lies.</p><p>In exploratory science, then, HARKing isn&apos;t an epistemic sin. It&apos;s the mechanism by which surprise is turned into understanding&#x2014;earned through independent, confirmatory tests. The real sin comes when people present exploratory conjectures as confirmatory claims.</p><h2 id="are-we-relying-too-heavily-on-omics-for-ideas">Are we relying too heavily on omics for ideas?</h2><p>Perhaps Frezza&apos;s real concern is that we risk relying too heavily on omics for new hypotheses?</p><p>Although we tend to call omics assays hypothesis-generating, that label doesn&apos;t quite capture their essence. Rather, they&apos;re <em>problem-generating.</em> These tools don&apos;t point us toward an explanation so much as they expand the set of observations that need explaining. And this can be incredibly helpful, because <em>problems are the substrates of hypotheses</em>.</p><p>Seen this way, Frezza&apos;s concern must shift: the issue isn&apos;t over-reliance on omics for new hypotheses, but for problems.</p><p>Sure, a carefully chosen omics assay can reveal interesting problems that targeted approaches would miss, precisely because we tend to focus on familiar regions of the molecular landscape.</p><p>But outsourcing the discovery of scientific problems to omics could have severe consequences. Identifying important problems requires wide reading, conceptual synthesis, and integration of disparate evidence. 
As pressure for new research questions rises, it becomes tempting to defer that labour to the tools. Just spray-and-pray omics. The result is research organised around whichever signal happens to emerge, rather than around questions grounded in deep understanding and clinical relevance.</p><p>What&apos;s at risk, then, isn&apos;t the capacity to hypothesise. Instead, it&apos;s the habits that curate scientific problems and make hypotheses worth having: reading widely, making tenuous connections and holding half-baked ideas in tension.</p><h2 id="ai-wont-replace-your-hypotheses">AI won&apos;t replace your hypotheses</h2><p>The last worry is that large language models (LLMs) will render our creativity obsolete. But this concern mischaracterizes what LLMs do and how they do it.</p><p>These models are trained on incredibly large bodies of text and designed to produce coherent, contextually relevant responses through pattern matching&#x2014;which is why they can generate plausible-sounding explanations. But we shouldn&apos;t mistake this for <em>understanding</em>. They carry no model of biological reality and no capacity to discern whether an explanation is credible or merely coherent-sounding. More fundamentally, they can&apos;t generate novel ideas, only combinations of those that were latent in their training data. We can think of them as sophisticated indexes of existing human knowledge&#x2014;useful for retrieval and recombination, but not for the kind of explanatory leap that constitutes a truly original hypothesis.</p><p>This is why Frezza&apos;s concern doesn&apos;t quite land. At its most valuable, scientific creativity involves constructing new explanations for phenomena that current frameworks fail to account for. That capacity isn&apos;t something LLMs possess.</p><p>That said, the candidate explanations LLMs produce could still be useful in practice. By recombining ideas across fields, they may spit out hypotheses that you hadn&apos;t considered before. 
Most of the outputs will be wrong or trivial, and will need triage. But expanding the set of potential explanations&#x2014;unconstrained by your own priors&#x2014;<em>might</em> enhance your personal creative process.</p><p>The scientist&apos;s role in an AI-assisted research environment is therefore not diminished but clarified. It is to identify real-world problems to solve, produce novel hypotheses, and judge which of them deserve to be tested. That judgment requires exactly what LLMs lack: a genuine <em>understanding</em> of the world. Which means it remains, at least for now, irreducibly ours.</p><h3 id="notes">Notes</h3><ol><li><a href="https://network.febs.org/posts/thinking-like-a-scientist-part-three-hypothesis-retrofitting">https://network.febs.org/posts/thinking-like-a-scientist-part-three-hypothesis-retrofitting</a></li></ol>]]></content:encoded></item><item><title><![CDATA[Why induction is not the foundation of science]]></title><description><![CDATA[<p>A video recently circulated on X in which Fran&#xE7;ois Jacob said non-hypothesis-driven research is &quot;very boring&quot;.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">&#x1F62F; Fran&#xE7;ois Jacob (co-discoverer of gene regulation)  asked by Lucy Shapiro (Stanford):<br>&quot;It&apos;s always been hypothesis-driven research [..] but a new way is looking at vast</p></blockquote></figure>]]></description><link>https://www.dansadler.blog/inductionisnotthefoundationofscience/</link><guid isPermaLink="false">6567d7a842f86449527718bc</guid><dc:creator><![CDATA[Daniel G. 
Sadler]]></dc:creator><pubDate>Mon, 18 Dec 2023 01:08:06 GMT</pubDate><media:content url="https://www.dansadler.blog/content/images/2023/12/DALL-E-2023-12-17-19.56.41---scientist-generating-knowledge--cubic-painting--.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.dansadler.blog/content/images/2023/12/DALL-E-2023-12-17-19.56.41---scientist-generating-knowledge--cubic-painting--.png" alt="Why induction is not the foundation of science"><p>A video recently circulated on X in which Fran&#xE7;ois Jacob said non-hypothesis-driven research is &quot;very boring&quot;.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">&#x1F62F; Fran&#xE7;ois Jacob (co-discoverer of gene regulation)  asked by Lucy Shapiro (Stanford):<br>&quot;It&apos;s always been hypothesis-driven research [..] but a new way is looking at vast amounts of data and not asking a question, just seeing if it give you a pattern.&quot;<br>Jacob: &quot;It&#x2019;s very boring..&quot; <a href="https://t.co/mPJIp6xIfS">pic.twitter.com/mPJIp6xIfS</a></p>&#x2014; Itai Yanai (@ItaiYanai) <a href="https://twitter.com/ItaiYanai/status/1727878000903950467?ref_src=twsrc%5Etfw">November 24, 2023</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

</figure><p>This provoked a response from many scientists, including Dr. David Glass. Chief amongst David&apos;s claims was that induction is the &#x201C;foundation of science&#x201D;. I disagree with this claim for several reasons, and I&apos;ll discuss them here.</p><p>I hope my criticism of these ideas helps clarify why induction is not the basis of experimental science. We must adopt an experimental framework that is consistent with the true aim of science &#x2013; <em>explanation.</em></p><p>The main critique of hypothesis testing is that guessing the outcomes of experiments causes bias. Instead of hypothesizing, Newton suggested that we should just wait for the evidence.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">One of the most famous quotes of all time: Isaac Newton&apos;s &quot;Hypotheses Non Fingo&quot; (I don&apos;t make hypotheses). He was asked to explain the cause of gravity, for which he had no data. So he pointed out that one should never guess, but wait for experimental evidence.</p>&#x2014; David J Glass MD (@davidjglassMD) <a href="https://twitter.com/davidjglassMD/status/1728967545691296066?ref_src=twsrc%5Etfw">November 27, 2023</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

</figure><p>Newton&apos;s advice raises questions: How long should we wait, exactly? And where do we look for said evidence?</p><p>Experiments are planned activities, where we put a <em>theory</em> to the test. Science progresses when we put forward explanatory theories that we think will fill gaps in our knowledge. These theories are bold guesses &#x2013; they are <em>conjectural</em>. And they are integral to the scientific process.</p><p>In the absence of a working hypothesis, what on earth would the point of an experiment be?</p><p>Dr. Glass also claimed that it is okay for us to theorize, so long as hypotheses are abandoned in favor of questions when we execute experiments.</p><p>This seems farcical. It&apos;s erroneous to suggest that theory can guide our experiments but simply be disregarded to avoid bias in data interpretation. And no data can be understood without invoking a significant explanatory framework anyway.</p><p>Newton&apos;s advice insinuates that we can derive knowledge from observations, in a process called inductive reasoning, of which David is an advocate. This method suggests that scientific progress is made by making general conclusions from a set of specific observations. It assumes that observed patterns will extend to unobserved instances.</p><p>Yet induction is a seriously flawed method of scientific reasoning. I&#x2019;ll outline two of its main limitations here.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Inductive reasoning is still the most useful framework for doing science - one seeks to learn from past experience (data) to make predictions about what will happen in the future (via repetition or generalization to other systems).... which can also be though of as model-testing.</p>&#x2014; David J Glass MD (@davidjglassMD) <a href="https://twitter.com/davidjglassMD/status/1729195958536396875?ref_src=twsrc%5Etfw">November 27, 2023</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

</figure><p>First, induction cannot be logically justified.</p><p>Philosopher David Hume realized that the future does not <em>necessarily</em> resemble the past. To argue otherwise would be circular.</p><p>This means that we can&apos;t justify the truth of theories (e.g., &#x201C;all swans are white&#x201D;) from the truth of basic statements verified by experience (e.g., &#x201C;all swans in my local park are white&#x201D;).</p><p>How then <em>do</em> we justify any conclusions beyond past instances of which we have had experience?</p><p>The answer is that we simply can&#x2019;t. There are no criteria at our disposal to establish the absolute truth of a theory.</p><p>But we can validly assert the <em>falsity </em>of a theory by observation; if I observe one black swan, it refutes the theory that all swans are white. Our ability to disprove theories by statements verified by experience is why falsifiability is key for scientific progress. And it is falsifiability on which the hypothesis framework, and its underlying philosophy of critical rationalism, hinges.</p><p>Another limitation of induction is that it views prediction as the main aim of science.</p><p>The inductivist&apos;s goal is to forecast the outcomes of future experiments off the back of repeated observations. This focus on predictability is odd. Being able to predict the outcome of an observation is not the same as understanding it. The latter depends on having a good explanation.</p><p>Our best scientific theories explain the very reality underlying our observations, whilst also containing accurate predictions. But &quot;theories&quot; produced by induction don&apos;t take this form.</p><p>Pursuing prediction is misguided, but there is more. The act of predicting the results of experiments, and extrapolating these findings, demands prior explanatory theories. How else could we account for the uncertainty in things we haven&apos;t observed? 
From this, it&apos;s obvious why Popper described induction as an illusion.</p><figure class="kg-card kg-image-card"><img src="https://www.dansadler.blog/content/images/2023/12/Screenshot-2023-12-17-at-8.06.19-PM.png" class="kg-image" alt="Why induction is not the foundation of science" loading="lazy" width="606" height="382" srcset="https://www.dansadler.blog/content/images/size/w600/2023/12/Screenshot-2023-12-17-at-8.06.19-PM.png 600w, https://www.dansadler.blog/content/images/2023/12/Screenshot-2023-12-17-at-8.06.19-PM.png 606w"></figure><p>I&apos;ll end by granting David that our choice of experimental framing matters. It guides our scientific practice. </p><p>But it&apos;s induction that poses a greater threat to science than critical rationalism.</p><p>Scientists working under the banner of induction are encouraged to verify &#x2013; rather than falsify &#x2013; their predictions, whilst shielding their inexplicit theories from criticism. This is not conducive to knowledge growth.</p><p>Fortunately, we have an alternative: the hypothesis framework and its philosophy of critical rationalism. It can be summarized as boldly proposing new theories, trying our very best to disprove them, and only cautiously accepting those that survive our most severe criticism. </p>]]></content:encoded></item><item><title><![CDATA[Do models need to explain?]]></title><description><![CDATA[<p>Models are constructs of reality. They condense knowledge in order to help us understand reality and support our ideas.</p><p>One way that we can test the validity of a model is to perform an experiment. For instance, if a model accurately predicts an experiment outcome, it is verified. But if</p>]]></description><link>https://www.dansadler.blog/theory-or-models/</link><guid isPermaLink="false">63ddd00f42f8644952770512</guid><dc:creator><![CDATA[Daniel G. 
Sadler]]></dc:creator><pubDate>Sun, 12 Feb 2023 03:55:01 GMT</pubDate><media:content url="https://www.dansadler.blog/content/images/2023/02/shubham-dhage-gC_aoAjQl2Q-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.dansadler.blog/content/images/2023/02/shubham-dhage-gC_aoAjQl2Q-unsplash.jpg" alt="Do models need to explain?"><p>Models are constructs of reality. They condense knowledge in order to help us understand reality and support our ideas.</p><p>One way that we can test the validity of a model is to perform an experiment. For instance, if a model accurately predicts an experiment outcome, it is verified. But if a model makes a false prediction, it signals that the model is flawed and needs to be revised.</p><p>The way you build and use a model depends on your choice of experimental framework. I&apos;ve previously discussed such frameworks <a href="https://t.co/jHZ1L5xMz8">here</a>. </p><p>Under the hypothesis framework, models are built on explanatory theories, whereas models built with the question framework rely exclusively upon predictions. Let&apos;s consider how models fit into these frameworks a little further. </p><p>Models aren&apos;t a central part of the hypothesis framework. This is because the approach recognizes that the primary aim of science is explanation. And so experiments are tools by which we subject hypotheses to falsification. They are not used to confirm our predictions. Because of this focus on error correction, there is little need to construct models for verification &#x2014; though they may help us to communicate or comprehend complex mechanisms.</p><p>Even though hypothesis testing focuses on falsification of explanatory theories, it doesn&apos;t mean that prediction and reproducibility are unimportant. It&apos;s just that the hypothesis framework views prediction as supplementary to explanation. Of course, our best theories explain the world <em>and</em> make accurate predictions at the same time. 
And it&apos;s also vital that we can reproduce experimental results. We shouldn&apos;t confuse the need for explanatory theories with a disregard for reproducibility. </p><p>Critically, models built under the hypothesis framework consist of <em>explanatory</em> theories.</p><p>On the other hand, models are central to the question framework. With this method, one gathers data in response to a question and builds a model from it. This model is then verified by testing its predictions. Because it is based on inductivism, the model is built solely on observations. Yes, the model is explanationless. </p><p>Explanationless models are problematic. The idea that theories or models needn&apos;t do more than make predictions is called instrumentalism. In this view, the only way in which we can determine the truth of a theory is the extent to which it predicts experiment outcomes. This position opposes realism &#x2014; the true idea that scientific theories can reliably predict <em>and</em> explain reality. </p><p>Some may argue it&apos;s superfluous to ask models or theories to explain when they can adequately predict. For practical matters, our ability to say that a certain treatment will reliably save someone&apos;s life, for example, is arguably all that matters. Do we really need to understand the <em>why</em>? I think we must. </p><p>Keeping with the life-saving treatment example, if we can explain how a certain treatment works, then we have a clear rationale to test it in other situations, or even try to develop other treatments with greater efficacy and fewer side effects, since we can explain its mode of action. On what grounds could we do this otherwise? We certainly wouldn&apos;t be justified in doing so on prediction alone.</p><blockquote>&#x201C;There is nothing more practical than a good theory&#x201D; (Lewin, 1952)</blockquote><p>Another issue with explanationless models relates to verification. Let&apos;s imagine that we&apos;ve tested a model built from our prior data. 
And the experiment outcome conflicts with the predictions made by our model. What would we do next? </p><p>According to this approach, we would simply change the model. But how exactly? The model we made was built on prior observations. There is a dilemma here. And it speaks to a critique of inductivism &#x2014; the future does not always resemble the past. </p><p>This is why we need models consisting of explanatory theories. For they account for our observations, which, by themselves, tell us nothing.</p>]]></content:encoded></item><item><title><![CDATA[Defending the hypothesis as a framework for scientific experimentation]]></title><description><![CDATA[<p>Scientific experiments help us learn more about reality.</p><p>The design and interpretation of experiments is guided by specific frameworks, each with its own underlying philosophy. But most scientists aren&apos;t taught this. It&apos;s vital we understand experimental frameworks so that we can be truly aligned with knowledge</p>]]></description><link>https://www.dansadler.blog/frameworks-scientific-experimentation/</link><guid isPermaLink="false">6334f61ac5a48343282a561b</guid><dc:creator><![CDATA[Daniel G. Sadler]]></dc:creator><pubDate>Sun, 29 Jan 2023 01:41:07 GMT</pubDate><media:content url="https://www.dansadler.blog/content/images/2023/12/brano-Mm1VIPqd0OA-unsplash-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.dansadler.blog/content/images/2023/12/brano-Mm1VIPqd0OA-unsplash-2.jpg" alt="Defending the hypothesis as a framework for scientific experimentation"><p>Scientific experiments help us learn more about reality.</p><p>The design and interpretation of experiments is guided by specific frameworks, each with its own underlying philosophy. But most scientists aren&apos;t taught this. It&apos;s vital we understand experimental frameworks so that we can be truly aligned with knowledge growth.</p><p>Here, I&apos;ll cover two frameworks: the hypothesis and the question. 
And I&apos;ll argue that the hypothesis is the ideal framework for experimental science.</p><h3 id="frameworks-for-experimentation">Frameworks for experimentation</h3><p>Scientists often use hypotheses to frame experiments. A hypothesis is simply a statement of the expected outcome of an experimental test. And based on the results of a carefully designed experiment, we accept or reject our hypotheses, generating knowledge as a result.</p><p>The hypothesis framework has been criticized for its potential to cause confirmation bias <sup><a href="https://doi.org/10.1016/j.cell.2008.07.033">1</a></sup>. This occurs when researchers anticipate a specific outcome and may selectively report or analyze data in a way that supports their hypothesis, leading to poor, irreproducible science.</p><p>It has also been argued that the hypothesis is incompatible with experiments that generate large amounts of data. For example, exploratory experiments that use &apos;Omic&apos; technologies, such as proteomics, transcriptomics or metabolomics, cannot practically be framed with hypotheses. Rather, these so-called &apos;fishing expeditions&apos; are hypothesis-generating in nature <sup><a href="http://grants.nih.gov/grants/guide/rfa-files/RFA-DA-10-014.html">2</a></sup>, meaning that follow-up work is needed to test hypotheses formed from the initial data gathered. </p><p>To address issues of bias, some scientists have turned to initiatives within the scope of open science, such as registered reports and preregistration, which aim to improve the openness, integrity, and reproducibility of research. These strategies have proven to be effective so far <sup><a href="https://doi.org/10.3233/ISU-170861">3</a></sup>, but they haven&apos;t been fully adopted by the scientific community. </p><p>Another proposed alternative to the hypothesis is the question framework <a href="https://doi.org/10.1373/clinchem.2010.144477"><sup>4</sup></a>. 
Here, an experiment starts with a question, which is asked in order to derive data that can be used to build a model. And the model is intended to be held up for verification &#x2014; its value apparently lying in its predictive power.</p><p>Because experiment outcomes aren&apos;t specified with a question, the scientist is somewhat operating in a state of ignorance. This may help to reduce bias. Moreover, using a question is arguably more practical when exploratory research is being performed with Omic-technologies.</p><h3 id="philosophies-of-experimental-frameworks">Philosophies of experimental frameworks</h3><p>The question framework relies on a philosophical approach called inductivism: the idea that scientific theories are derived from observations. According to this approach, we can form general conclusions about the world based on specific observations, and as the number of affirming observations increases, our theories become more justified. </p><p>Let&apos;s use a basic example to illustrate inductive reasoning. Imagine that you see a white swan for the first time. As time passes, you continue to see white swans everywhere you go. Having accumulated many observations of this kind, you generalize that &quot;all swans are white&quot;. And you become more confident in your theory each time you see a white swan.</p><p>But inductivism has several flaws. For example, it wrongly assumes we can derive theories directly from observations. There&apos;s a logical gap that cannot be bridged here; we can&apos;t deduce that observations made under specific conditions will hold true in other similar situations.</p><p>Inductivism also mistakenly views prediction as the aim of science. Although our best theories contain accurate predictions, their predictive power is only secondary to their explanatory content. 
The generalized predictions (so-called theories) made via induction aren&apos;t analogous to new scientific theories, because they don&apos;t contain explanations that answer why and how observations come to be. Indeed, it&apos;s the ability of scientific theories to<em> explain</em> the physical world that is fundamental <sup><a href="https://doi.org/10.1016/j.shpsb.2016.06.001">5</a></sup>. Yet under inductivism, two or more theories with different explanatory content, which make the same valid predictions, are considered just as good as each other. Clearly, these theories would have vastly different utility in the real world (as was emphasized by Deutsch in <em>The Fabric of Reality</em>).</p><blockquote>&quot;Suppose that one day the farmer starts bringing the chickens more food than usual. How one extrapolates this new set of observations to predict the farmer&apos;s future behaviour depends entirely on how one explains it. According to the benevolent-farmer theory, it is evidence that the farmer&apos;s benevolence towards chickens has increased, and that therefore the chickens have even less to worry about than before. But according to the fattening-up theory, the behaviour is ominous &#x2014; it is evidence that slaughter is imminent.&quot; The Fabric of Reality.</blockquote><p>Although prediction isn&apos;t the purpose of science, it&apos;s a central component of the scientific method. The outcomes of experimental tests help us choose between two competing theories, and such tests ultimately depend upon the predictions made by the theories. If the predictions of a theory don&apos;t come true then we reject it. This is the main source of the error of thinking that there is nothing more to a scientific theory than its predictions. </p><p>Aside from philosophy, another critique of the question framework is that it can&apos;t completely protect us from bias. 
We would only ever frame an experiment with a question if our current understanding or explanations seemed inadequate in the first place. This means that our pre-existing knowledge and assumptions would influence our judgement of the experimental outcomes, regardless of the method used to frame the experiment. </p><p>Unlike the question, the hypothesis is built on critical rationalism, which is a theory of knowledge that emphasizes criticism <a href="https://doi.org/10.4324/9780203994627"><sup>6</sup></a>. This approach seeks to falsify hypotheses through experimental tests. Falsified theories are rejected as poor explanations, whereas those that survive criticism are considered better approximations of the truth. Here, there is an asymmetry between our ability to falsify and support scientific theories. While no number of experiments can prove a theory (though we can tentatively accept the best available explanation until it is superseded), a single reproducible experiment can refute one. For example, no matter how many times you observe a white swan, you can never confirm your theory that all swans are white. But observing one black swan can refute your theory.</p><p>Admittedly, it&apos;s frustrating to falsify our grand hypotheses. But as critical rationalists, we needn&apos;t waste precious resources trying to confirm our hypotheses. Instead, we can create new, more refined theories and subject them to further criticism. This is the beauty of critical rationalism. It encourages us to improve existing theories in a never-ending cycle of conjecture and criticism. </p><p>Observation is important under critical rationalism, but unlike inductivism, it doesn&apos;t serve as a source of new theories. Rather, observation helps us criticize theories. In other words, it helps us eliminate one of two theories subjected to a crucial experimental test. </p><p>This raises an important question. What is the source of scientific theories if not observation? 
The answer is human creativity. As scientists, we make bold guesses &#x2014; conjectures &#x2014; in response to real-world problems. And problems can arise when our observations or predictions conflict with our expectations and reveal deficiencies in our knowledge.</p><h3 id="a-framework-aligned-with-the-aim-of-science">A framework aligned with the aim of science</h3><p>Given the opposing views of critical rationalism and inductivism, we must ask ourselves: which experimental framework is better aligned with the aim of science? </p><p>I&apos;d argue that the hypothesis framework &#x2014; championing falsification and seeking good explanatory theories &#x2014; is fundamental for experimental science. Whilst a question may be preferable for exploratory research intent on producing vast datasets (hypothesis-generating), the hypothesis should be the cornerstone of the scientific method.</p><p>But we must acknowledge the philosophy of hypothesis testing. Hypotheses are designed to be falsified. Not confirmed or verified.</p><p>To be truly aligned with the aim of science, we must strive to explain the world through conjecture and criticism.</p><h3 id="summary">Summary</h3><p>The hypothesis and its underlying philosophy of critical rationalism should be the bedrock of experimental science.</p>]]></content:encoded></item></channel></rss>