
Ethics of early detection of disease risk factors: A scoping review - BMC Medical Ethics

Descriptive analytics

The searches resulted in N = 1201 articles from Scopus, N = 1121 articles from Embase, and N = 118 articles from Philosopher's Index. Additionally, promising articles identified in the review's preparatory phase were added (N = 37). After removal of duplicates, N = 2034 articles were included for screening based on title and abstract. Title and abstract screening resulted in N = 112 articles that were found eligible for full-text review. Full-text screening resulted in N = 48 articles fulfilling the criteria. Following the searches, we found further relevant articles (N = 7) by consulting the reference lists of the included articles. A final sample of N = 55 articles was included in the analysis. See Fig. 1.

Fig. 1 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart of the article screening phases

The included articles were published by authors in Europe (N = 31), the United States and Canada (N = 21), and Australia (N = 3). The articles were published between 1990 and 2021, with a peak between 2015 and 2019 (N = 18). A large proportion of the articles discussed the ethics of early detection of risk factors without focusing on a particular disease (N = 20), and many focused on mental health and neurological diseases (N = 25), followed by cancers (N = 8), nutrition (N = 2), and viral infection (N = 1). The articles discussed early detection of risk factors in the context of public health (N = 18), clinical health (N = 10), both public and clinical health (N = 15), occupational (N = 8), and forensic (N = 4) settings. The majority of the included articles utilized methods common in applied ethics, including conceptual analysis and critical reflection on and engagement with the empirical literature, instead of conducting empirical research. The articles using empirical methods (N = 5) made use of focus group discussions [21, 22], interviews [23], ethnographic fieldwork [24], and expert workshops [14].

Analysis of the included articles identified eight common ethical themes: (1) Reliability and uncertainty in early detection, (2) Autonomy, (3) Privacy, (4) Beneficence and Non-maleficence, (5) Downstream burdens on others, (6) Responsibility, (7) Justice, (8) Medicalization and conceptual disruption. For an overview of these themes and the covered subthemes, see Table 1. The themes are in many ways interconnected, but for the sake of clarity will be discussed separately below. See Table 2 for the patterning chart of the main themes.

Table 1 Key ethical themes and subthemes for the early detection of disease risk factors
Table 2 Key ethical themes and patterning chart

Reliability and uncertainty in early detection

Reliability and uncertainty of early risk information are frequently discussed as important ethical considerations for detecting early disease risk factors [see Table 2, column Reliability and uncertainty]. The efficacy and accuracy of detecting the risk under investigation are, for example, often discussed. Whereas diagnostic tests provide binary outcomes (a disease is present or absent), methods to determine and predict disease risk provide probabilistic outcomes ranging from 0 to 100% risk of developing the disease, where the low and high extremes are very rare for most diseases because many biological and environmental factors affect the risk score [4, 15, 25].

In risk screening tests, reliability and uncertainty are often discussed in terms of validity, i.e. the sensitivity and specificity of the test [26, 27]. Sensitivity refers to the chance that the test returns a positive result in people who are at risk (true positive rate), whereas specificity refers to the chance of a negative result in people who are not at risk (true negative rate). Tests with low validity therefore return more false-positive and false-negative results. The predictive value of a test also depends on the prevalence in the target population: if the prevalence of a risk factor is very low, even a test with high sensitivity and specificity will have a low positive predictive value. Furthermore, the analytical validity of a test does not guarantee clinical validity and utility, i.e. how well the test correlates with clinical responses and treatments [21]. Depending on these latter two, the number of at-risk people that need to be correctly detected and treated to prevent one person from developing the disease ("number needed to treat") varies across diseases and detection methods [28].
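To make the dependence of predictive value on prevalence concrete, here is a minimal sketch (with illustrative numbers of our own, not taken from the reviewed studies) that computes positive and negative predictive values from sensitivity, specificity, and prevalence via Bayes' rule, plus the standard number-needed-to-treat arithmetic:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values via Bayes' rule."""
    true_pos = sensitivity * prevalence                # at risk, test positive
    false_pos = (1 - specificity) * (1 - prevalence)   # not at risk, test positive
    true_neg = specificity * (1 - prevalence)          # not at risk, test negative
    false_neg = (1 - sensitivity) * prevalence         # at risk, test negative
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Even a highly valid test yields mostly false alarms when the risk factor is rare:
ppv, npv = predictive_values(sensitivity=0.99, specificity=0.99, prevalence=0.001)
print(f"PPV = {ppv:.1%}, NPV = {npv:.3%}")  # PPV ~ 9%: about 9 in 10 positives are false

# Number needed to treat (NNT) is the reciprocal of the absolute risk reduction:
# if an intervention lowers incidence among those detected from 10% to 6%,
# NNT = 1 / (0.10 - 0.06) = 25 people treated per case prevented.
```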

As preventive medicine is turning towards detecting and preventing more complex, multifactorial diseases, authors warn that the predictive value and reliability of detection methods need to be carefully monitored and balanced against other aspects such as the cost-effectiveness and actionability of the risk information (see the sections on, respectively, justice and beneficence & non-maleficence). Furthermore, non-genetic risk factors, including epigenetic factors [21, 29, 30], can change over time, and interactions with different environmental factors and conditions can produce different outcomes. This increases the uncertainty associated with risk predictions [4, 25, 27, 31, 32], but also provides opportunities for preventive interventions to decrease risk dispositions and modify the disease course [43].

Early detection of disease risk factors can be affected by, and play into, various biases that decrease the reliability of test results and increase uncertainty in risk predictions. Frequently mentioned is the way in which selection bias in the development and validation phases of detection methods (e.g., due to non-representative study samples) can limit the generalizability of these methods [33, 34]. Conversely, the ability to detect risk factors and abnormalities in increasingly early phases can play into lead-time bias (where earlier detection creates a mistaken sense of increased survival time), length bias (where the effectiveness of a test is overestimated because it over-identifies slow-developing, less aggressive diseases), and overdiagnosis bias (where many people are identified as being at high risk of a disease that will never develop during their lifetime [35]). (For overdiagnosis, also see the section on medicalization and conceptual disruption.)
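As a minimal numeric sketch of lead-time bias (the ages below are hypothetical, chosen only to illustrate the mechanism): earlier detection lengthens measured survival-from-diagnosis even when the course of the disease is completely unchanged.

```python
# Lead-time bias: earlier detection inflates survival-from-diagnosis even
# when the date of death is unchanged. All ages are hypothetical.
age_at_clinical_diagnosis = 70   # disease would be noticed via symptoms at 70
age_at_screening_detection = 65  # screening finds the same disease at 65
age_at_death = 74                # unchanged by when the disease is detected

survival_clinical = age_at_death - age_at_clinical_diagnosis    # 4 years
survival_screened = age_at_death - age_at_screening_detection   # 9 years

# On paper the screened patient "survives" 5 years longer, but has gained
# nothing except 5 more years of living with the diagnosis.
print(survival_clinical, survival_screened)  # 4 9
```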

To overcome potential biases and improve the predictive value and reliability of early screenings the use of big data approaches is sometimes proposed as a solution. However, authors warn that “it is not always the case that more data will lead to better predictive models” [28, p. 124] as it can also increase the complexity and degree of uncertainty by increasing the variance of the results (and thereby causing loss of precision). Moreover, it does not necessarily solve other issues discussed in this section or the thematic sections below [24, 36, 37].

Autonomy

A commonly mentioned set of issues centers on the notion of personal autonomy [see Table 2, column Autonomy], which we understand here in the broad sense of being able to "lead one's life in a way that accords with what one genuinely cares about" [38, p. 5]. In the surveyed papers, autonomy considerations about having the capacities and opportunities to make one's own choices are mostly discussed in the context of informed consent. There is broad consensus among authors that informed consent should contain information about the expected benefits and possible medical and psychosocial risks [39], about who can use and access the data (e.g. secondary uses by third parties [40]), and about the possibility of incidental findings that may be sensitive as they might unveil environmental and lifestyle exposures [21]. The extensive and complex nature of the information required for truly informed consent, however, demands a level of health literacy that may not be attainable for many individuals [23]. For example, worries are expressed that complex health information can overwhelm people and compromise their capacity for autonomous decision-making [41], that it is difficult for many people to understand the difference between absolute and relative risk [35], and that people might have unrealistic ideas about the explanatory power of early disease risk factors [42].
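To illustrate the absolute-versus-relative-risk point mentioned above with hypothetical numbers: a result reported as "doubling" someone's risk can correspond to a very small absolute change.

```python
# Hypothetical example: a detected risk factor "doubles" lifetime disease risk.
baseline_risk = 0.002      # 0.2% lifetime risk without the risk factor
relative_risk = 2.0        # the "twice the risk" headline figure

elevated_risk = baseline_risk * relative_risk      # 0.4%
absolute_increase = elevated_risk - baseline_risk  # 0.002, i.e. 0.2 percentage points

# "Your risk has doubled" and "your risk rose by 0.2 percentage points"
# describe the same test result; only the former tends to alarm.
print(f"{baseline_risk:.1%} -> {elevated_risk:.1%} (+{absolute_increase:.1%})")
```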

Concerns about (lack of) health-related competencies are especially prominent in the surveyed literature. Dilemmas can arise when early detection takes place early in life and consent has to be given by legal proxies (parents or legal guardians). One issue here is that preventive testing in children might deprive them of their 'right not to know', in which case it could be preferable to postpone testing until the young person has developed sufficient competency to make their own decision. However, waiting can also deprive the same person of the opportunity to make choices that can affect their disease risk, or compromise their health and potentially the development of the necessary competencies by allowing the disease to develop [13, 15, 30, 39, 43,44,45]. Competency for informed consent in adults is mostly discussed for individuals at risk of mental health disorders [39, 46]. Developing mental disease symptoms can increasingly compromise the required competencies, such that "a fully competent and autonomous patient at the beginning of a study may progress to a point of diminished capacity and autonomy" [12, p. 7].

Another subtheme related to autonomy that several authors critically discuss is empowerment, in particular the idea that early detection can empower people to take control of their health and to plan their future. A related risk that is critically discussed is that this framing rests on the assumption that individuals "can (and should) be held morally responsible for their health outcomes" [47, p. 77]. As social and moral norms promoting responsibility for health can put pressure on individuals and groups to conform, several authors worry that the narrative of empowerment might compromise the voluntariness of the decision to take an early detection test [28, 29, 48,49,50,51], as well as downstream decisions about lifestyle choices [32].

Privacy

Early detection of health-related risks generates sensitive information about a person’s susceptibility to a variety of diseases [see Table 2, column Privacy]. Moreover, the information that is collected in the service of such an assessment can potentially contain indicators of a person’s (past) lifestyle and environmental exposures that also warrant protection (e.g. via epigenetic changes) [4, 21, 29]. Therefore, confidentiality of tests and results is considered an important component of protecting sensitive data and preserving individual privacy, but ensuring confidentiality becomes increasingly difficult when a broad range of data is collected and possibly shared or linked to other (public) data sources [4, 21]. Linking datasets increases the risk of identification of individuals in the datasets [4, 14, 21, 22]. Pooling or aggregating data before sharing reduces the risk of identification, but also decreases the richness of the dataset and its (clinical) utility [15].

For the informed consent procedure (also see the section Autonomy), clarity about individual privacy, data confidentiality, and data storage and sharing is frequently mentioned [13, 21, 33, 42, 44, 52, 53]. As individuals might worry about (future) disclosure of their risk information, information about potential privacy risks and about how institutions deal with potential privacy breaches is important for valid informed consent. A lack of confidentiality, or a lack of trust in confidentiality, might lead to people not participating in early detection efforts that could benefit them [54, 55]. Legislation to protect sensitive personal information can provide reassurance [33].

Some authors hold that confidentiality might be rightfully breached in certain cases, such as when parents are acting as a proxy for their child [15] or when the risk information is relevant to others as well, such as (future) caretakers and family members who potentially carry the same risk factors. However, hesitance was observed in the literature with respect to assigning a moral duty to physicians or at-risk individuals to share a risk status with relevant others as this would harm their right to autonomy [44, 45, 52].

Broad consensus was observed about the importance of protecting privacy and confidentiality against third parties such as insurers and employers. Worries exist that third parties can misuse risk information to discriminate against or stigmatize individuals who have been labeled as being at high risk of disease [13, 27, 30, 35, 43, 48, 56, 57]. (See also the section on Justice.)

Beneficence and non-maleficence

In healthcare, the principles of beneficence ("do as much good as possible") and non-maleficence ("do no harm beyond what is proportionate") are important moral guiding principles [58] for striking a positive balance between an intervention's benefits and its harms for the individual. While the principles themselves are mentioned relatively infrequently in the surveyed literature (but see [27, 35, 40]), they are implicitly present in the background of many discussions about the potential benefits and harms of the early detection of disease risks [see Table 2, column Beneficence & non-maleficence]. For example, multiple authors argue that the 'latent period' between detecting a risk and potential disease occurrence can also be a period of uncertainty and anxiety. They question whether early knowledge about being at heightened risk is more beneficial to an individual than spending the interim time in 'normalcy', especially when no preventive actions are currently available [13, 32, 39].

Most discussed are the ways in which a high-risk classification may lead to worries and anxieties about developing disease [2, 13, 22, 23, 28, 32, 44,45,46,47,48, 59] and can have negative effects on self-image [22, 31, 32, 45, 46]. Authors note that the effects on self-image might lead to depression [33] and even suicide [12, 46], although these effects are generally considered rare [49]. Another possible harmful psychological effect might be that positive test results lead to a perceived lack of control and decreased motivation for a future that "threatens to be taken away by illness", possibly influencing important life decisions such as family planning [39, p. 6]. Knowing one is at greater risk for disease can also cause feelings of being fragile or 'defected' [4, 39, 54]. Such knowledge can also contribute to a self-fulfilling prophecy [12, 15, 31, 60] when the (anticipated) risk status leads to stress and anxieties that subsequently affect cognitive functioning [12], which in turn promotes risk-increasing behaviors [15].

Harmful impacts of early detection of disease risk factors are especially problematic and unjustified when results are incorrect. Authors warn that false-positive results can lead to unnecessary labeling and interventions [15, 27, 28, 39, 41, 44, 60]. Likewise, false-negative results can deprive patients of beneficial early interventions and provoke unjustified feelings of security [27, 28, 47, 60], possibly leading to the neglect of early symptoms ("They said everything was okey") [2, p. 278]. (Also see the section Reliability and uncertainty.) Advances in research methods, however, promise to increase the precision of screening tests and thereby decrease miscategorizations [61].

In short, it is not always beneficial for individuals to participate in early detection programs and undergo (sometimes unnecessary) follow-up examinations and interventions [35]. Though transparency and truth-telling through the disclosure of test results and possible incidental findings are valued as showing respect for autonomy, the current consensus within the medical community is against disclosing risk information with uncertain predictive value, a position justified by the principle of non-maleficence [39, 49].

Many authors acknowledge that if early knowledge is to be beneficial to individuals, the screening results should be 'actionable', in the sense that they present (viable) options for individuals to change their situation and health prospects. The existence of an "effective intervention to prevent the disorder in those who are identified as being at risk" is even identified by some as a prerequisite for the ethical acceptability of early screening tests [57, p. 352], especially when it concerns children who cannot yet decide for themselves [45]. Apart from early interventions to fully prevent disease occurrence, several other preventive actions are mentioned that can be considered important for beneficence, including providing reassurance when a test is negative [14, 15, 46, 49], offering support in planning one's life for a future disease [45, 46], supporting reproductive decisions [32, 41, 43, 62, 63], and giving advice on modifying health behaviors and lifestyle [15, 32, 41, 47, 51, 64].

Promoting healthy lifestyles and health-positive behaviors is mentioned in particular as an important intervention to mitigate disease risk and support beneficence [12, 29, 32, 51, 59]. The assumption that individuals can and will successfully implement the provided lifestyle and health behavior advice is, however, questioned and criticized by several authors. Social science studies indicate that changing health behaviors is difficult [28], and the lack of direct experience of symptoms, uncertainty about whether symptoms will materialize, and uncertainty about the effectiveness of changing lifestyle are mentioned as possible demotivating factors [2]. The harmful psychological effects discussed above can also be barriers to effective behavior change [2, 4]. Even when risk information effectively motivates some individuals to change health behaviors, it should not be assumed to motivate any particular individual [43]. Other barriers to adopting healthy lifestyles that are mentioned are low health literacy, low socio-economic status, and restricted access to healthcare. The implications of these inequalities between individuals and social groups are discussed in the theme on Justice.

To achieve the proposed benefits and minimize the harms, adequate communication about risks is discussed as crucial. Participants of preventive interventions should be informed about, among other things, the expected benefits and harms and the actionability of the risk information (also see the section on informed consent within the theme Autonomy). It may be difficult, however, for both professionals and laypersons to adequately grasp the difference between susceptibility and disease, and to understand probabilistic and relative risk data [35, 39, 43, 50, 59]. Some authors recommend avoiding complex medical terminology and contextualizing the provided information in relation to the patient's situation [12, 53]. Misinterpretation of test results and unsubstantiated expectations about the explanatory power and actionability of the information (therapeutic misconception) are widely discussed as harmful implications of inadequate communication of risk information [12, 42, 46, 47, 49, 59, 63]. An example by Schermer & Richard [53, p. 143] is that "the emotional and social effects of terms chosen to communicate with lay-people can be considerable; being told one is 'at risk' for developing AD [Alzheimer's disease] is different from being told one has preclinical or asymptomatic AD – although the situations these terms aim to describe may be exactly the same". Educating patients using simple support aids [45], offering counseling [39, 43], and training healthcare professionals in patient communication are discussed as ways to improve risk communication [44, 49].

Downstream burdens on others

Besides the harms and benefits of early risk factor detection for the individuals who consent to screening procedures, authors frequently mention the downstream effects that screening participation may have for, and in relation to, friends and family [see Table 2, column Downstream burdens on others]. Though these downstream effects to a certain extent relate to the bioethical principles of beneficence and non-maleficence as well (e.g., consider the social harms that may befall individuals through stigmatization; see also Justice), we discuss them separately because they also relate to broader ethical questions about how to strike a balance between diverging interests of multiple individuals.

In this context, authors mention that family and others around "at-risk individuals" might think of them "as in some sense already impaired" [43, p. 69] and treat them differently [42, 61, 63]. For example, children might be treated differently at school [39]. This need not be harmful per se, but authors warn that it can have adverse impacts on relationships [15], cause conflicts within the family [13, 45, 52], and contribute to possible self-fulfilling prophecies [15, 43, 53] (see also the section Beneficence and Non-maleficence).

To prevent conflicts or misunderstandings within families and relationships, it is important to provide adequate risk information and an explanation of what a disease risk means for the screened individual and relevant others. Multiple authors propose that family counseling should be offered when conflict is probable [39, 47, 52]. Risk information can also have direct consequences for relatives, for example when the risk is heritable or when parents need to make decisions on behalf of their child. The interests of family or significant others can create tensions between the individual's right to keep risk information confidential and opportunities to reduce risk for others (the principle of non-maleficence). This raises questions about the duties of the patient and his or her physician towards other persons at risk [40, 45, 52].

Responsibility

Responsibilities for health outcomes and the development and prevention of illness are discussed in the majority of the included articles [see Table 2, column Responsibility]. As was touched upon in the themes of Autonomy and Beneficence and Non-maleficence, empowering people to use risk information to make health decisions and manage their well-being is an important driver for the early detection of disease risk factors. Although some authors mention the possibility that detection of risk factors, especially biological factors, might lead to ascribing decreased responsibility to individuals for ill health [4, 31], most authors discuss how individual responsibilities for health and well-being are increasing due to a focus on personalized disease risks, leading to individuals also increasingly being held accountable for their ill health [22, 24, 28, 29, 32, 47, 50, 51, 55, 59, 62]. However, emphasizing individual responsibility for health and well-being might overburden individuals and suggest they are to blame for outcomes that are not always within their control. Such lack of control can be caused by the amount and complexity of health-related information [41] or by the lack of means and resources to be proactive about health [37, 42, p. 204].

Authors warn that if such shifts of responsibility towards the individual occur, individuals or parents who do not act on health information or choose not to participate in early screenings might then face victim blaming [2, 21, 29, 34, 56, 65], and people who do not comply with the norm of taking individual responsibility for health prevention might come to be seen and treated as "irresponsible" [22, 28, 29, 37]. Norms of individual responsibility for health can also lead to feelings of guilt and self-blame when disease occurs [21, 22, 62]. According to Singh & Rose: "if biomarkers for [antisocial behavior] are found to be present during early childhood screening, then children might be subject to intrusive medical interventions that focus on individual-level risk factors rather than on social and environmental risk factors." [42, p. 204].

An increased focus on individual risk factors can indeed obscure environmental and societal causes of ill health that are often more structural and affect larger populations. If these factors remain unnoticed, the responsibility of, for example, employers, industry, and governments for these risks is pushed out of view [21, 29, 54, 56, 60]. Worries are expressed that this also shifts the focus away from studying and analyzing collective interventions to promote health [2, 62], whereas preventing disease by intervening in harmful environmental or societal factors, e.g. environmental pollution or norms of sedentary working, is often more effective and avoids blaming individuals [23, 29, 37, 57, 59]. In the occupational context, worries are expressed that this shift towards individual responsibility for health can lead to blaming or discriminating against ill or at-risk employees instead of employers taking responsibility for safe working environments [26, 48, 56, 66].

Justice

Many of the discussed considerations have implications for justice, which is here understood as the fair, equitable, and appropriate treatment of persons and the fair distribution of healthcare resources [67] [see Table 2, column Justice]. As the prevalence of disease risks varies between individuals and groups, identifying individuals and groups at high risk can be used to support and target those who need help the most. However, there are concerns that labeling individuals or groups can also have stigmatizing and discriminatory effects [13, 14, 34, 40, 43, 51, 63, 68]. This is especially problematic in the absence of effective preventive interventions [32, 60], although it is also argued that, in the long run, preventive interventions "may reduce the severity of both security-based and shame-based stigma" [63, p. 217]. Special attention is paid to ethnic and societal groups that are currently, or historically have been, vulnerable to stigmatization and discrimination [21, 30, 39,40,41,42, 45, 54, 65, 69].

Ascribing responsibility and blame for ill health is described as a basis for possible stigmatization and discrimination. This can take many forms, including social exclusion, reduced or denied access to healthcare services and insurance, and exclusion from educational institutions or jobs. Besides the detrimental effects of stigmatization and discrimination themselves, fear of these effects might also generate self-stigma – causing harmful psychological effects and possible social withdrawal while the actual disease might never develop – and it might deter people from participating in testing or screening programs that might benefit them [4, 54, 57, 68].

The distributive justice concerns that are discussed relate predominantly to inequalities in access to healthcare and opportunities to prevent disease. When early risk factor screenings are introduced, fair access to testing and subsequent interventions should be ensured, as should the provision of appropriate information for different societal groups. When this is not the case, health inequalities can be exacerbated along socioeconomic gradients, as not everyone can afford testing and follow-up healthcare, and, as noted within the theme of Responsibility, not everyone has the same capacities to understand risk information and take appropriate measures to protect their health [2, 14, 22, 29, 32, 37, 47, 51, 65]. The focus on individual responsibility for health promotion and disease prevention is also argued to undermine health solidarity [22, 44].

Another concern is the fact that many biological and environmental risk factors are much more common in certain minority or otherwise disadvantaged groups than in less deprived populations [40, 65, 69]. These inequalities add to the above-mentioned inequalities in access to healthcare and preventive opportunities that affect the same vulnerable and disadvantaged groups [37, 65]. Detecting risk factors in these vulnerable groups and individuals is argued to improve the understanding of what types of interventions work for different groups, thereby possibly contributing to disease prevention and more health equity overall [61]. However, these risks should not be individualized in such a way that harmful environmental and societal risk factors are neglected, as this could suggest that persons are ultimately themselves responsible for their disease – which could amount to victim blaming. Moreover, it could lead to diminished efforts by government and private organizations to tackle the structural social and environmental determinants underlying health inequities, possibly leading to decreased health solidarity and expanding health inequality [22, 65].

Mental health and neuropsychiatric disorders are discussed as current causes of stigma. Several authors have concerns that early detection of risks for mental health disorders will subject high-risk individuals to similar stigmatization, potentially extending to family members as well [15, 45, 52, 63]. Not only might those at high risk for mental health disorders experience stigma from the people around them, e.g. friends and family, or teachers, employers, or healthcare professionals, but they are also vulnerable to self-stigmatization and other harmful psychological effects, further increasing the risk of developing mental health pathologies [12, 39, 70]. (Also see the section Downstream burdens on others.)

The use of tests for risk factors or knowledge about disease risk in occupational and insurance contexts raises concerns about stigmatization and discrimination. Employers might use information on employees' risk status to provide a safe work environment and protect their employees' health. However, it can also result in a situation where employers (possibly unintentionally) favor workers who are less likely to develop illness. Testing for early disease risks in the workplace should be used to include people in the workplace by improving safe and healthy workplaces, and not as a means of "selection of the fittest" [26, p. 98] or excluding people from the workplace [4, 21, 40, 43, 48, 49, 54,55,56, 59, 66, 69]. Insurers might use risk information to differentiate insurance premiums or exclude people from insurance. They might also put pressure on people to accept tests or share their test results, or require clients to modify their lifestyles and environment based on their personal disease risks [4, 14, 44]. Regulations about privacy and the confidential use of disease risk information are very important to protect people against stigmatization and discrimination. (See also the theme Privacy.)

Finally, there are concerns about the impact of early detection of disease risk on the equitable and efficient use of financial resources for healthcare. Questions are raised about whether screening interventions produce sufficient health benefits given the healthcare budget they require [13, 37, 41, 44, 49, 55]. Moreover, increasingly sensitive technologies will expand the opportunities and needs for follow-up examinations, the monitoring of detected risks, and the provision of preventive treatments, therapeutic interventions, or counseling [28]. When taking these downstream effects into account, it is not obvious that early risk factor detection has a favorable cost-effectiveness ratio compared to clinical healthcare [37, 68]. Commercial screening tests, furthermore, risk draining collective healthcare resources: individuals or companies may purchase such tests for themselves, but follow-up examination and preventive treatments will subsequently be sought in the public healthcare system [23].
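As a hedged sketch of how such cost-effectiveness questions are typically framed (all figures below are invented for illustration): the incremental cost-effectiveness ratio compares a screening programme's extra costs, including downstream follow-up, against its extra health benefit.

```python
# Hypothetical incremental cost-effectiveness ratio (ICER) for an early
# detection programme versus usual clinical care. All numbers are invented.
cost_screening = 12_000_000   # programme cost incl. follow-up and monitoring (EUR)
cost_usual_care = 9_000_000   # cost under clinical detection and treatment (EUR)
qalys_screening = 5_200       # quality-adjusted life years under screening
qalys_usual_care = 5_000      # quality-adjusted life years under usual care

icer = (cost_screening - cost_usual_care) / (qalys_screening - qalys_usual_care)
print(f"ICER = {icer:,.0f} EUR per QALY gained")  # 15,000 EUR/QALY

# Whether this ratio is "favorable" depends on the willingness-to-pay
# threshold a healthcare system applies, and on whether downstream costs
# (monitoring, overtreatment, counseling) are fully counted.
```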

Medicalization and conceptual disruption

Medicalization refers to processes of "defining more and more aspects of life in relation to aims defined within the medical domain" [24, p. 28]. Multiple authors describe how increasing possibilities to detect disease risk factors in early phases can contribute to medicalizing these early risk states (e.g. [14, 24, 32]) [see Table 2, column Medicalization & conceptual disruption]. They point to an ongoing trend in which early detection can reinforce widening classifications of disease due to a focus on health and risk factors. Increased attention to early risk factors can reconceptualize what is regarded as "health" and "disease" and turn (what we now consider) healthy people into patients in need of medical attention [21, 28, 53, 64]. Improved understanding of the causes of disease can contribute to a blurring of the distinction between risk factors and disease indicators, thereby undermining the distinction between primary and secondary prevention and driving medicalization [32]. This may well change established views of what is normal or acceptable (e.g. in food habits) and thus interfere with sociocultural practices and values that are central to common conceptions of the good life, e.g., in relation to nutrition, lifestyle, or dignified aging [24, 47, 62].

The aim of detecting disease risk factors in an early phase to intervene and prevent disease development suggests that early disease is slumbering in everyone and must be "intercepted before it can strike" [37, p. 13]. This might cause feelings of agitation and insecurity as "feeling healthy no longer means being healthy" [23, p. 37, emphasis in the original]. This reconceptualization of the meaning of being healthy reinforces assumptions that continuous (self-)monitoring is required, feeding into what David Armstrong [71] has dubbed "surveillance medicine": the surveillance of healthy populations that dissolves the distinctions between health and illness and widens the space in which medicine operates [28, 37].

The increasing focus on early detection of risk factors and the widening of what is "actionable," e.g., through innovations in medical practices and increasingly sensitive technologies for detecting biological "abnormalities," is discussed as a driver of overdiagnosis and overtreatment [28, 36]. Overdiagnosis applies in "situations where an actual disease or risk factor is diagnosed in people who are mostly well, and where this condition will not actually come to influence future health, either because it disappears spontaneously without medical attention or remains asymptomatic until death from other causes" [28, p. 111]. Overdiagnosis and related overtreatment can be considered harmful when the disease would not have caused problems anyway, when scarce healthcare resources are used, or when the medical interventions have serious physical and psychological side effects [28, 35, 39].

Finally, the increasing technological opportunities to obtain early risk information are thought to further contribute to medicalization and the implications discussed above. In addition, worries are expressed that the commercialization of preventive health tests may lead to individuals screening themselves without a proper understanding of the consequences [36], unrealistic expectations about tests' explanatory power due to advertisements that make exaggerated promises about health protection [42, 72], increasing healthcare disparities due to inequalities in access, and decreasing health solidarity due to a further increase in individual responsibilities for health [72].
