Which of the following methods of reporting results by a clinical laboratory abides by the recommended guidelines?

One of the largest clinical laboratories at a large hospital is the pathology lab, which studies body fluids and tissues to aid in diagnostic determinations.

From: Advances in Clinical Chemistry, 2020

Food Poisoning Outbreaks

B. Miller, S.H.W. Notermans, in Encyclopedia of Food Microbiology (Second Edition), 2014

Surveillance of Foodborne Infections

Clinical laboratories routinely identify pathogenic organisms that may be foodborne by testing clinical specimens, such as blood or stool, from patients. The regular reporting of the isolation of specific pathogens provides an important source of surveillance data. However, laboratory-based surveillance is dependent on an infrastructure of competent laboratories that provide routine diagnostic services, and often it also requires a central reference laboratory that can confirm the identity of unusual isolates and provide quality assurance. Follow-up studies of cases identified through laboratory diagnostics provide additional epidemiological data, including information on possible sources of the infection and on whether the cases are sporadic or associated with other cases. Clinical laboratories are moving away from culture-based methods toward rapid testing methods that do not require organism isolation. This shift in clinical testing methods may mean that public health surveillance laboratories need to find alternative methods to identify genetic subtypes for some pathogens or risk losing a valuable surveillance tool.


URL: https://www.sciencedirect.com/science/article/pii/B9780123847300001282

A practical guide to validation and verification of analytical methods in the clinical laboratory

Joachim Pum, in Advances in Clinical Chemistry, 2019

4.3.1 Calculation of LoQ

CLSI recommends using at least two reagent lots with a minimum of 36 sample replicates per reagent lot to determine the LoQ [59]. The experimental design is identical to the precision profile approach described in Section 9.5.2.2, the only difference being that bias estimates must be available for the samples in order to calculate TE estimates. Observed TEs are then plotted (on the y-axis) against sample analyte concentrations (on the x-axis) and fit with a suitable linear or polynomial model. Using the best-fit model, the concentration corresponding to the TE goal is calculated, and this is reported as the LoQ (Fig. 8).


Fig. 8. Limit of quantitation (LoQ). In this case, the LoQ is the interpolated concentration at which TE(%) = 20%. As the sample concentration drops, TE(%) increases.
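The interpolation step above can be sketched in a few lines. This is a minimal illustration, not CLSI's procedure: the precision-profile values are hypothetical, and simple linear interpolation between bracketing points stands in for the fitted linear or polynomial model described in the text.

```python
def interpolate_loq(profile, te_goal=20.0):
    """Find the concentration at which TE(%) falls to te_goal.

    `profile` is a list of (concentration, te_percent) pairs sorted by
    increasing concentration; TE(%) decreases as concentration rises.
    Linear interpolation between the two bracketing points stands in
    for the best-fit model described in the text.
    """
    for (c_lo, te_lo), (c_hi, te_hi) in zip(profile, profile[1:]):
        if te_hi <= te_goal <= te_lo:  # TE goal is bracketed by this segment
            frac = (te_lo - te_goal) / (te_lo - te_hi)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("TE goal not bracketed by the precision profile")

# Hypothetical precision profile: TE(%) rises as concentration drops,
# mirroring the shape of Fig. 8.
profile = [(0.5, 45.0), (1.0, 30.0), (2.0, 20.0), (4.0, 12.0), (8.0, 8.0)]
loq = interpolate_loq(profile)
print(f"LoQ at TE goal of 20%: {loq:.2f} concentration units")  # 2.00
```

With these illustrative data, the 20% TE goal falls exactly on a measured point, so the reported LoQ is 2.00; a stricter goal would interpolate between points.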


URL: https://www.sciencedirect.com/science/article/pii/S006524231930006X

Healthcare technology basics

Samantha Jacques PhD, FACHE, Barbara Christe PhD, in Introduction to Clinical Engineering, 2020

Clinical laboratory

The clinical laboratory of a hospital utilizes samples of fluids or tissues from patients to identify evidence of disease or medical conditions. The space is organized into divisions such as anatomic pathology, clinical chemistry, hematology, genetics, microbiology, phlebotomy, and the blood bank. Some hospitals also have a reproductive biology testing division or a blood donor center that may or may not fall under the laboratory. Each section of the laboratory has specialized equipment and analyzers to conduct tests on blood and other specimens. Technology includes general equipment like microscopes, centrifuges, slide stainers, heaters, incubators, shakers, and tissue preparation devices. Specialty equipment can include apheresis machines, chemistry analyzers, hematology analyzers, electron microscopes, cell counters, and automated specimen processing lines. Many devices are highly complex and automated, processing samples, adding reagents, and making measurements in complex ways. Most laboratory instruments interface with a laboratory information system that receives the test results and sends them to the patient’s medical record.


URL: https://www.sciencedirect.com/science/article/pii/B9780128181034000028

Translating Pharmacogenomic Research to Therapeutic Potentials (Bench to Bedside)

Ann M. Moyer, Pedro J. Caraballo, in Reference Module in Biomedical Sciences, 2021

5.1 Regulatory basics

In the United States, clinical laboratories are subject to many regulatory requirements. The Clinical Laboratory Improvement Amendments (CLIA) are laws that clinical laboratories must abide by, enforced by the Centers for Medicare and Medicaid Services (CMS). The goal of CLIA is to improve the performance of clinical testing to protect patients. CLIA includes requirements for verifying or establishing performance characteristics of tests, reporting standards, monitoring performance through quality control and proficiency testing, and ensuring that educational requirements, including training and competency, are met. In order to perform clinical testing, laboratories must obtain a CLIA certificate by meeting a set of standards. Under CLIA, laboratory testing is classified by the complexity of the test, and the requirements are based on that complexity. Most pharmacogenomic testing is categorized as high complexity. CMS approves accreditation organizations to inspect and regulate laboratories and ensure CLIA is followed. Accreditation agencies may have requirements that are more rigorous than those outlined by CLIA. The College of American Pathologists (CAP) is one organization that accredits many clinical laboratories in the US. All clinical laboratories in the US must be CLIA-certified, and many are CAP-accredited. As part of the CLIA requirements, all laboratories must participate in some form of proficiency testing; CAP and other organizations offer proficiency testing programs. While the tests offered vary among laboratories in terms of the alleles/variants and genes included, proficiency testing data demonstrate that most laboratories perform and report results accurately (Moyer et al., 2020; Wu, 2013).


URL: https://www.sciencedirect.com/science/article/pii/B9780128204726001456

AI applications in diagnostic technologies and services

Louis J. Catania, in Foundations of Artificial Intelligence in Healthcare and Bioscience, 2021

5.1.2.1 AI’s influence on laboratory testing

The clinical laboratory in healthcare has been among the earliest entities to adopt robotics and algorithms into its workflow. AI technologies known as “Expert Systems” (see Chapter 3, page 53) introduced knowledge-based systems that provide sequential laboratory testing and interpretation as early as 1984 [111]. Expert systems don’t include the ability to learn by themselves, but instead, make decisions based on the accumulated knowledge with which they are programmed.

Computational pathology applies computational models, machine learning, and visualizations to make lab output both more useful and more easily understood by the clinical decision-maker. Computational pathology has clinical value in all aspects of medicine via a focus on computational methods that incorporate clinical pathology, anatomic pathology (including digital imaging), and molecular/genomic pathology datasets (more below under “Genetic Testing”).

Continuous remote sensing of patients using “wearables” such as glucose monitoring devices, oximetry, temperature, heart rate, and respiratory rate monitors connected to a central computing device via the “Internet of Things” (IoT) will be the norm. AI-enhanced microfluidics and compact, interactive point-of-care testing (POCT) labs are set to alter the way diagnostics are carried out. An example is the “Maverick Detection System” from Genalyte [112]. Biological probes bound to silicon biosensor chips bind macromolecules in the serum. The binding is detected by a change in light resonance, which is determined photometrically. The company plans to detect up to 128 analytes (substances in the serum) using disposable chips from a single sample.

Today’s clinical labs are already using advanced robotics to test minute volumes of blood, serum, and other body fluids from thousands of samples in a day. They give highly accurate and reproducible answers to clinical questions, at scales almost too complicated for humans to duplicate. These machines are driven by conventional algorithmic programs that represent and use data, iterating repetitively and exhaustively through a decision sequence using mathematics and equations, and finally presenting a number or result within confidence limits.

In the future, robots used in the clinical laboratory will be heuristic (self-learning), using Bayesian logic and inferential processes, with numerous ways to derive the best decision possible, even allowing for missing information. Artificial Intelligence programs combined with databases, data mining, statistics, mathematical modeling, pattern recognition, computer vision, natural language processing, mixed reality, and ambient computing will change the way laboratories generate and display clinical information in the future.

AI and machine learning software are beginning to integrate themselves as tools for efficiency and accuracy within pathology. Software is being developed by start-ups, often in tandem with prominent educational institutions or large hospital research laboratories, addressing different diseases and conditions. A review of the functionalities of AI and machine learning software in the field of pathology reveals predominant usage in whole slide imaging analysis and diagnosis, tumor tissue genomics and its correlation to therapy, and companion diagnostic devices. The ICU (Intensive Care Unit) of the future will have AI programs that concurrently evaluate the continuous streams of data from multiple monitors and data collection devices. The programs will pool their information and present a comprehensive picture of the patient’s health to doctors, autonomously adjusting equipment settings to keep the patient in optimal condition [113].

ML offers significant potential to improve the quality of laboratory medicine. ML-based algorithms in commercial and research-driven applications have demonstrated promising results. Laboratory medicine professionals will need to understand what can be done reliably with the technology, what the pitfalls are, and to establish what constitutes best practices as ML models are introduced into clinical workflows [114].


URL: https://www.sciencedirect.com/science/article/pii/B9780128244777000055

Francisella, Brucella and Pasteurella

Beatriz Plata Barril, in Encyclopedia of Infection and Immunity, 2022

Antibiotic susceptibility testing and treatment

The Clinical and Laboratory Standards Institute (CLSI) establishes that antibiotic susceptibility testing of Francisella tularensis strains should be performed using Mueller-Hinton broth enriched with 2% defined growth supplement. The pH must be adjusted to 7.1 ± 0.1 after addition of the growth supplement. The inoculum should be calibrated to a final concentration of 5 × 10⁵ CFU/mL. Incubation in a 5% CO2-enriched atmosphere may lead to acidification of the medium and overestimation of aminoglycoside and macrolide MICs, or underestimation of tetracycline MICs (Caspar and Maurin, 2017).

The European Committee on Antimicrobial Susceptibility Testing (EUCAST) has neither antibiotic breakpoints nor recommendations on antibiotic susceptibility testing for Francisella spp.

Several agar media have been used for MIC determination with the E-test strip method, which appears to be a convenient alternative to the broth microdilution method; however, it has not been standardized (Caspar and Maurin, 2017).

Antimicrobials with well-established clinical efficacy include aminoglycosides, tetracyclines, fluoroquinolones, and chloramphenicol; resistance to these drugs has not been reported. Among the antibiotics recommended for first-line treatment of tularemia, ciprofloxacin displayed the lowest MIC ranges; the other most active fluoroquinolone is levofloxacin. Streptomycin is the preferred aminoglycoside because of its high efficacy; however, gentamicin is more widely available and has less vestibular toxicity. Chloramphenicol use is restricted to meningitis because of potential bone marrow toxicity. Beta-lactams have been associated with clinical failure. Two β-lactamase genes (bla1 and bla2) have been found in the Live Vaccine Strain (LVS). A class A β-lactamase (FTU-1) is present in at least 14 strains of F. tularensis subspecies and corresponds to the bla2 gene of LVS. Other β-lactam resistance mechanisms conferring resistance to all β-lactams have also been described; therefore, these antimicrobials are not recommended to treat tularemia (Penn, n.d.; Versalovic et al., 2011; Caspar and Maurin, 2017).

Antimicrobial susceptibility testing of F. tularensis is not usually performed in clinical microbiology laboratories because of safety concerns and because resistance to antibiotics used for tularemia treatment has not been reported (Versalovic et al., 2011).


URL: https://www.sciencedirect.com/science/article/pii/B9780128187319001014

Identifying and Reducing Potentially Wrong Immunoassay Results Even When Plausible and “Not-Unreasonable”

Adel A.A. Ismail, in Advances in Clinical Chemistry, 2014

1.1 Conventional statistics: An über-used paradigm in laboratory medicine

The clinical laboratory can produce a large amount of numerical data (e.g., laboratory tests, physiological investigations) for diagnostic purposes, monitoring of therapy, and/or research. Conventional statistics (also known as frequentist or classical statistics) are widely known among laboratorians, being the dominant approach taught and used for assessing clinical/scientific data. Conventional statistics are well developed, mathematically powerful, and suit repeatable “homogeneous” events such as reference ranges/parameters in healthy individuals versus those with illnesses or on medications, using methodologies such as contingency tables/predictive values or receiver operating characteristic plots [3–5]. The statistical methods most commonly used are parametric because they involve estimating the value and variation of “a parameter” under consideration. Parametric statistics rely on the statistical “law of errors,” developed mathematically by the German mathematician Carl Friedrich Gauss as a continuous probability density distribution known as the “Gaussian,” bell, or normal distribution because it is so commonly encountered in practice. Parameters that are skewed and not normally distributed may be mathematically transformed (e.g., using the natural logarithm) to eliminate skewness and kurtosis (flatness or peakedness of the curve near the mean of the distribution) before statistical computation [6]. Examples of commonly used parametric statistics are the mean ± SD, unpaired and paired t-tests, analysis of variance (ANOVA), Pearson correlation, linear and nonlinear regression analyses, Chi-square, and Cochran Q tests. Data that are severely skewed or kurtosed, or analyses that make no assumption about the population distribution, may use procedures based on ranking of the data, computed without reference to specific parameter(s), i.e., distribution-free or nonparametric methods.
Common examples are the median with interquartile range; the Wilcoxon, Mann–Whitney, and Kruskal–Wallis tests for group differences; Spearman correlation and nonparametric regression; and the Friedman test. Calculation methods [5] and statistical packages are readily available for all parametric and nonparametric methodologies. The “central limit theorem” explains why deviations from the Gaussian distribution can be accommodated so long as the data set is “large enough.” Nonparametric tests also work well when applied to samples from a Gaussian-distributed population, albeit being slightly less powerful than parametric tests, thus slightly understating differences and, accordingly, their calculable significance.
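The log-transformation idea above can be demonstrated numerically: strongly right-skewed data become approximately symmetric after taking natural logarithms. This is an illustrative sketch with simulated data, not an example from the chapter; the seed and distribution parameters are arbitrary.

```python
import math
import random

def skewness(xs):
    """Sample skewness: the third standardized moment of the data."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

random.seed(42)
# Log-normal data are a classic example of positively skewed laboratory values.
raw = [random.lognormvariate(0.0, 0.8) for _ in range(5000)]
logged = [math.log(x) for x in raw]  # natural-log transform

print(f"skewness before log transform: {skewness(raw):.2f}")     # strongly positive
print(f"skewness after  log transform: {skewness(logged):.2f}")  # near zero
```

After transformation, parametric methods such as the t-test can be applied to the logged values with the Gaussian assumption reasonably satisfied.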

Central and fundamental to conventional statistics is the “null hypothesis,” coined and introduced by the English geneticist and statistician R.A. Fisher and developed further by J. Neyman and E. Pearson. An important feature of the null hypothesis is that it lacks an axiomatic basis, because its starting point is the presumption that no variation/difference exists between the sets of data under consideration. When the null hypothesis is rejected (i.e., variation is found), the inference is expressed as a probability (“p” value) with confidence intervals (CI) that “encompass the true values” with high probability. It is, however, less appreciated that arbitrary translation of the p-value into the probability of the null hypothesis has produced misconceptions and mistakes in assessing uncertainty [7]. For example, a highly significant p-value is usually taken as the probability of the tested hypothesis being “beyond a reasonable doubt.” Furthermore, the CI is widely, but mistakenly, considered to represent a range of values that contains the true value of the parameter under consideration. Such entrenched misconceptions and misinterpretations of p-values and CIs are not uncommon among clinical scientists, and professional statisticians have constantly criticized them in the statistical literature [7,8]. Further epistemological discussion of such issues is outside the scope of this note and can be found in the literature and textbooks on conventional statistics.
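The correct frequentist reading of a 95% CI can be shown by simulation: across many repeated experiments, roughly 95% of the computed intervals contain the true parameter. The 95% belongs to the procedure, not to any single interval. All numbers below are illustrative.

```python
import math
import random

random.seed(1)
TRUE_MEAN, SD, N, TRIALS = 10.0, 2.0, 50, 2000

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    mean = sum(sample) / N
    # Standard error of the mean from the sample standard deviation.
    sem = math.sqrt(sum((x - mean) ** 2 for x in sample) / (N - 1)) / math.sqrt(N)
    lo, hi = mean - 1.96 * sem, mean + 1.96 * sem  # normal-approximation 95% CI
    if lo <= TRUE_MEAN <= hi:
        covered += 1

print(f"coverage across {TRIALS} repeated experiments: {covered / TRIALS:.1%}")
```

The observed coverage is close to 95%, yet no individual interval has a “95% probability of containing the truth” in the frequentist framework; each interval either contains the true mean or it does not.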


URL: https://www.sciencedirect.com/science/article/pii/B9780128014011000074

Overview of Laboratory Testing and Clinical Presentations of Complement Deficiencies and Dysregulation

A. Frazer-Abel, ... M.A.V. Willrich, in Advances in Clinical Chemistry, 2016

6.2 Postanalytical Challenges

In the clinical laboratory, proficiency testing is a mandatory quality assurance activity for all analytes. However, the availability of external commercial programs and materials for complement testing is limited. Most analytes are not regulated by proficiency testing agencies, such as the College of American Pathologists, or by the FDA, and a wide range of assays are available, most of them as laboratory-developed tests. A majority of laboratories implement alternative assessment of performance for proficiency testing using blinded or split samples exchanged between institutions. Nevertheless, there is an international effort to address this issue. The International Complement Society (ICS, www.complement.org), under the International Union of Immunological Societies, has produced a standard serum that is being utilized for proficiency testing under INSTAND (Society for Promoting Quality Assurance in Medical Laboratories e.V.), and there are hopes that these efforts will also yield a standard for calibration of complement assays. Shortcomings remain because the assays are not standardized or harmonized, which means that reference intervals and performance between methods are often not interchangeable across laboratories. A standardization committee has been formed, again with the support of the ICS, to harmonize tests and provide a performance comparison across laboratories from different countries, but the processes needed to achieve equivalent performance are still being put in place.

The lack of standardization in complement testing across laboratories is a significant confounding factor. Because of the complexity of testing and relationship to clinical diseases, several tests are needed for a comprehensive interpretation of complement function. A panel approach is usually required to isolate a complement disorder within the CP or AP. Examples of such panels can be found from a number of clinical laboratories that specialize in complement or specific complement-related diseases.

In both complement deficiencies and complement dysregulation, activity assays will provide an estimate of the overall complement function, and paired quantitation of the individual complement components and regulatory proteins will aid in providing information regarding protein synthesis and abundance in circulation. With that information in hand, it is possible to then select further testing for the study of the gene involved in a given deficiency, autoantibodies against complement factors or gene variants associated with acquired complement dysregulation conditions.


URL: https://www.sciencedirect.com/science/article/pii/S006524231630035X

Nanomedical Devices

Zoraida P. Aguilar, in Nanomaterials for Medical Applications, 2013

6.2.2 Nanocameras

Unlike a conventional clinical laboratory setup, medical robots such as medical micromachines that are implanted or ingested can continuously gather diagnostic information and fine-tune treatment over an extended period of time.8 Some current examples are pill-sized cameras to view the digestive tract as well as implanted glucose and bone growth monitors to aid in the treatment of diabetes and joint replacements. The capabilities of micromachines are significantly extended by stand-alone millimeter-scale microrobots for possible in vivo surgical use. For example, external magnetic fields from a clinical magnetic resonance imaging (MRI) system can move microrobots containing ferromagnetic particles through blood vessels.73–76

In the field of nanotechnology, continuing development of in vivo machines has the potential to revolutionize health care27,77–79 with devices small enough to reach and interact with individual cells of the body.80,81 Current efforts focus on the development and functionalization of nanomaterials86 that will allow their application to enhance diagnostic imaging,87–94 targeted drug delivery,87–94 and a combination of both diagnosis and treatment, which has been termed nanotheranostics.95–101 Various studies have focused on the development and applications of nanomaterials that target specific cell types for imaging and/or drug delivery.102–106


URL: https://www.sciencedirect.com/science/article/pii/B9780123850898000066

Polymyxins Resistance in Enterobacteriaceae☆

Xingyan Ma, ... Bin Huang, in Reference Module in Biomedical Sciences, 2018

Methods for Detection of Polymyxins-Resistant Enterobacteriaceae

In most clinical laboratories, the disk diffusion (DD) test, E-test strips, the broth microdilution (BMD) method, and the Vitek 2 system remain the routine susceptibility methods, among which BMD is suggested as the primary method for polymyxin minimum inhibitory concentration (MIC) testing. Conventional methods, however, have many drawbacks (Humphries, 2015). Colistin methanesulfonate (CMS), a less toxic prodrug commonly used for therapy, yields erroneously high MICs in vitro, so it should not be used for susceptibility testing (Landman et al., 2008). The poor and slow diffusion of polymyxins makes the results of the DD test unreliable, yielding small zones of inhibition (Lo-Ten-Foe et al., 2007). False susceptibility (32%) occurred with the E-test, whose MICs were significantly lower than those obtained by BMD for resistant isolates (Hindler and Humphries, 2013). The amphiphilic nature of polymyxins makes them adhere to the polystyrene surface of BMD microdilution plates, with proportionally higher adsorption at lower polymyxin concentrations (Karvanen et al., 2013). Although the adsorption of colistin to polystyrene can be mitigated by the addition of a surfactant such as polysorbate 80 (P-80, also known as Tween-80), the use of P-80 is still questionable (Humphries, 2015). All of the above methods are laborious, require manual preparation, or are expensive. Automated systems allow rapid identification and antimicrobial susceptibility testing and produce reliable results (Lo-Ten-Foe et al., 2007; Poirel et al., 2017). However, the Vitek 2 system displayed low sensitivity in the detection of polymyxin-resistant Enterobacteriaceae, and care should be taken in the interpretation of known heteroresistant subpopulations (Tan and Ng, 2007; Lo-Ten-Foe et al., 2007). It is time to ask for a suitable, standard method to detect resistant subpopulations.
Additionally, polymyxin resistance may disappear after long-term storage at −70°C (Hindler and Humphries, 2013), which calls for an easy, inexpensive, and sensitive technique to screen for polymyxin resistance from fresh cultures, or even directly from clinical samples, in routine laboratories. In the past, colistin and polymyxin B breakpoints were interpreted according to documents issued by the Clinical and Laboratory Standards Institute (CLSI) or the European Committee on Antimicrobial Susceptibility Testing (EUCAST). In 2016, the CLSI/EUCAST Joint Working Group recommended updated clinical breakpoints for Acinetobacter spp. and Pseudomonas aeruginosa, as shown in Table 2. However, there were insufficient data to establish clinical breakpoints for Enterobacteriaceae, so epidemiological cutoff values (ECVs) were set for Enterobacter aerogenes, Enterobacter cloacae, Escherichia coli, Klebsiella pneumoniae, and Raoultella ornithinolytica, as shown in Table 3. Among Enterobacteriaceae, the MIC distributions of other genera and species may differ. Meanwhile, neither PK-PD nor clinical data have been evaluated for polymyxins with any Enterobacteriaceae, so the ECV interpretations serve only to help laboratorians, clinicians, and public health professionals identify isolates with colistin MICs above the wild-type (those with acquired and/or mutational resistance mechanisms to colistin, such as mcr-1). Recently, the Rapid Polymyxin NP test, based on the detection of bacterial growth in the presence of a defined polymyxin concentration, has been demonstrated to have high specificity (99.3%) and sensitivity (95.4%) (Nordmann et al., 2016a). A selective medium named “SuperPolymyxin,” which contains colistin at 3.5 μg/mL, can detect any type of polymyxin-resistant Gram-negative organism while preventing swarming of Proteus spp.; the sensitivity and specificity of this medium can reach 100% (Nordmann et al., 2016b).
At the molecular level, a SYBR Green-based real-time PCR assay, considered a simple, specific, sensitive, and rapid method for the detection of mcr-1-positive isolates, was recently published (Bontron et al., 2016).

Table 2. Polymyxins breakpoints according to the CLSI AST Subcommittee in 2016

Organism                  Susceptible (μg/mL)    Resistant (μg/mL)
Acinetobacter spp.        ≤ 2                    ≥ 4
Pseudomonas aeruginosa    ≤ 2                    ≥ 4

Table 3. Polymyxins ECVs according to the CLSI AST Subcommittee in 2016

Organism                      Wild-type ECV (μg/mL)    Non-wild-type ECV (μg/mL)
Enterobacter aerogenes        ≤ 2                      ≥ 4
Enterobacter cloacae          ≤ 2                      ≥ 4
Escherichia coli              ≤ 2                      ≥ 4
Klebsiella pneumoniae         ≤ 2                      ≥ 4
Raoultella ornithinolytica    ≤ 2                      ≥ 4


URL: https://www.sciencedirect.com/science/article/pii/B9780128012383641508

Which of the following methods provides for the most secure and reliable delivery of laboratory results?

Which of the following methods provides for the most secure and reliable delivery of laboratory results? Electronic reports.

Which test is performed to identify bacteremia?

The diagnosis of bacteremia is based on blood culture results [1-5]. Issues related to indications, collection technique, number of cultures, volume of blood, timing of collection, and interpretation of results will be reviewed here. The management of bacteremia is discussed separately.

Which of the following Governing bodies mandates the use of safety features on needles?

The Needlestick Safety and Prevention Act (NSPA) was signed into law in November 2000. It mandated OSHA to revise its bloodborne pathogens standard to include specific additional definitions and requirements.

When drawing a PT test, within what time must you deliver it to the laboratory?

Coagulation assays such as the prothrombin time (PT) and the activated partial thromboplastin time (APTT or PTT) no longer need to be transported chilled. However, transportation should occur within ONE hour of collection for the PT and within FOUR hours for the APTT.
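The transport windows above reduce to a simple time comparison. This is a minimal sketch, assuming hypothetical function and table names; the 1-hour PT and 4-hour APTT limits come from the answer above.

```python
from datetime import datetime, timedelta

# Maximum time from collection to laboratory receipt, per the answer above.
TRANSPORT_WINDOW = {"PT": timedelta(hours=1), "APTT": timedelta(hours=4)}

def specimen_acceptable(test, collected, received):
    """True if the specimen reached the lab within the window for `test`."""
    return received - collected <= TRANSPORT_WINDOW[test]

collected = datetime(2024, 1, 1, 9, 0)
print(specimen_acceptable("PT", collected, datetime(2024, 1, 1, 9, 45)))     # True
print(specimen_acceptable("PT", collected, datetime(2024, 1, 1, 10, 30)))    # False
print(specimen_acceptable("APTT", collected, datetime(2024, 1, 1, 12, 30)))  # True
```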