Let’s talk about RNA
RNA sequencing is a technique used to analyze the transcriptome: the set of genes a genome actually expresses, read out through its RNA. Such gene expression analysis has become a standard tool for genomic studies.
Nonetheless, RNA sequencing is still expensive and time-consuming. It requires the costly preparation of a complete cDNA library (the pool of DNA generated by reverse transcription of cellular RNA). And finally, the generated data themselves are difficult to analyze. All of this makes RNA sequencing hard to perform, and its adoption less widespread than it could be.
In recent years, new approaches have emerged, spurred by the revolution in single-cell transcriptomics, that use what is known as “sample barcoding” or “multiplexing”. Short barcode sequences are added to DNA fragments during library preparation, allowing each fragment to be traced back to its sample of origin during data analysis. With this approach, a single library can contain multiple samples or individual cells.
Barcoding reduces both cost and time, benefits that could extend to bulk RNA sequencing of large sets of samples (1). But challenges remain in adapting and validating protocols for reliable and inexpensive profiling of bulk RNA samples.
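The core of demultiplexing is simple: read the barcode at the start of each sequenced fragment and bin the fragment by sample. The sketch below illustrates the idea with invented 4-base barcodes and reads; real protocols use longer barcodes, allow mismatches, and run on millions of reads.

```python
# Minimal sketch of sample demultiplexing: assign sequencing reads to samples
# by matching the barcode at the start of each read. Barcodes, sample names,
# and reads are hypothetical examples, not from any real protocol.

BARCODES = {
    "ACGT": "sample_1",
    "TGCA": "sample_2",
}
BARCODE_LEN = 4

def demultiplex(reads):
    """Group reads by leading barcode; unknown barcodes go to 'undetermined'."""
    bins = {name: [] for name in BARCODES.values()}
    bins["undetermined"] = []
    for read in reads:
        barcode, insert = read[:BARCODE_LEN], read[BARCODE_LEN:]
        bins[BARCODES.get(barcode, "undetermined")].append(insert)
    return bins

reads = ["ACGTTTTTGGGG", "TGCAAACCC", "NNNNAAAA"]
bins = demultiplex(reads)
# sample_1 receives "TTTTGGGG", sample_2 receives "AACCC",
# and the unmatched read ends up in "undetermined"
```

Because the barcode carries all the sample identity, the wet-lab work collapses into preparing one pooled library, and separation happens purely in software.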
Using transcriptomics in toxicology: toxicogenomics
One of the challenges in toxicology is extrapolating the results of the different phases of risk analysis from experimental systems to human populations. Animal models in particular, although widely used, often differ from humans in substance clearance or enzymatic activity. For these practical reasons, but also for ethical, political, and economic reasons, laboratories are being asked to make major efforts to replace these models with alternatives, to reduce their use to a minimum, and to refine experimental strategies to minimize animal stress and pain (the “3Rs” principle).

It is in this context that chemical risk prevention and management organizations have turned to computational toxicology, and more specifically to predictive toxicology. These methods consist of extrapolating known information associated with a molecule to predict the effect of that molecule, or a similar one, on humans and their environment via the determination of its toxicological “signature”. This signature can be of various kinds (physiological, molecular, genomic…), observed in an individual or its descendants after exposure to one or more factors (biological, physical, or chemical).
Transcriptomics data are relevant to several challenges in toxicogenomics. After careful planning of exposure conditions and data preprocessing, toxicogenomics data can be used in predictive toxicology, where more advanced modeling techniques are applied. The volume of molecular profiles produced by omics-based technologies is constantly increasing, together with a plethora of methods made available to facilitate their analysis, their interpretation, and the generation of accurate and stable predictive models.
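To make the predictive-toxicology idea concrete, here is a toy sketch: classify a compound's expression profile as toxic or non-toxic by comparing it to class centroids learned from labeled profiles. All gene values and labels are invented for illustration; real pipelines use far larger profiles and richer models.

```python
# Toy nearest-centroid classifier over made-up log-fold-change vectors,
# illustrating how labeled expression profiles can drive a toxicity prediction.
# Numbers and class labels are invented, not from any real data set.

def centroid(profiles):
    """Element-wise mean of a list of equal-length expression vectors."""
    n = len(profiles)
    return [sum(vals) / n for vals in zip(*profiles)]

def distance(a, b):
    """Euclidean distance between two expression vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(profile, class_centroids):
    """Assign a new profile to the class with the nearest centroid."""
    return min(class_centroids, key=lambda label: distance(profile, class_centroids[label]))

training = {
    "toxic":     [[2.1, -1.8, 0.9], [1.7, -2.2, 1.1]],
    "non_toxic": [[0.1, 0.2, -0.1], [-0.2, 0.0, 0.1]],
}
centroids = {label: centroid(profiles) for label, profiles in training.items()}

print(predict([1.9, -2.0, 1.0], centroids))  # closer to the "toxic" centroid
```

The "stable predictive models" mentioned above are, in essence, more sophisticated versions of this: they learn a mapping from expression patterns to toxicity outcomes and apply it to new compounds.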
Transcriptomics for human-relevant toxicity testing
Although genomics can be applied to toxicology in numerous ways, here we focus on where transcriptomics can, and already does, make an impact on human-relevant chemical toxicity. Three important areas are mechanistic insight, biomarker identification, and mixture risk assessment.
One: A recent comparative case study (2) on the human risk of benzo(a)pyrene in drinking water found that genomics-informed assessments yielded points of departure (POD) comparable to those from traditional assessment approaches; moreover, toxicogenomics could also delineate more detailed modes of action in the mouse toxicity model, supporting human relevance by showing consistency between the pathways perturbed in mouse tissue and expression data from human cells.
Two: A report by Makarov and Gorlin (3) presented an efficient method for identifying reliable biomarkers using toxicogenomics data sets. Using two different algorithms (‘search marker’ and ‘recognition algorithm’) on the well-known toxicogenomics data set DrugMatrix, they were able to identify genomic patterns specific to chemical compounds regardless of dose or time.

Three: Several studies have attempted to implement mixture modeling in transcriptomics analysis to better understand the interactions of combined chemicals, a topic considered of utmost relevance if we are to appreciate the true impact environmental chemicals can have on human health. However, our understanding of the molecular and cellular responses to complex mixtures remains lacking, and the use of classical mixture toxicity models to predict mixture effects from toxicogenomics data is still in its infancy.
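The biomarker-search idea from the second example can be sketched very simply: keep the genes whose expression change points the same way, past a threshold, in every dose/time condition for a compound. This is not the actual ‘search marker’ algorithm of reference 3, just a simplified illustration; the gene names and values below are invented.

```python
# Simplified illustration of dose/time-independent marker search (not the
# algorithm of Makarov & Gorlin): keep genes whose log-fold-change exceeds a
# threshold in the SAME direction across all conditions. Values are invented.

def consistent_markers(profiles, threshold=1.0):
    """profiles: {condition: {gene: log-fold-change}}.
    Return genes perturbed past `threshold` in all conditions,
    always in the same direction ('up' or 'down')."""
    shared_genes = set.intersection(*(set(p) for p in profiles.values()))
    markers = {}
    for gene in shared_genes:
        changes = [p[gene] for p in profiles.values()]
        if all(c >= threshold for c in changes):
            markers[gene] = "up"
        elif all(c <= -threshold for c in changes):
            markers[gene] = "down"
    return markers

compound_a = {
    "low_dose_6h":   {"CYP1A1": 2.5, "GSTA1": -1.4, "ACTB": 0.1},
    "high_dose_6h":  {"CYP1A1": 3.8, "GSTA1": -2.0, "ACTB": -0.2},
    "high_dose_24h": {"CYP1A1": 3.1, "GSTA1": -1.1, "ACTB": 0.3},
}
markers = consistent_markers(compound_a)
# CYP1A1 is flagged 'up' and GSTA1 'down'; ACTB is not consistent enough
```

Genes that survive such a filter across doses and times are exactly the kind of compound-specific pattern a biomarker signature is built from.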
Massive amounts of gene expression data are now available in the public domain, enabling new biological questions to be addressed through data reuse without the need for further experimentation. We are optimistic that the future of toxicogenomics will deliver on many of its promises and thus also aid contemporary toxicology to deliver on increasing demands from governing bodies and the public to safeguard human health against xenobiotic insults at ever lower costs and better predictability.
1 - Alpern D, Gardeux V, Russeil J, Mangeat B, Meireles-Filho ACA, Breysse R, Hacker D, Deplancke B: BRB-seq: ultra-affordable high-throughput transcriptomics enabled by bulk RNA barcoding and sequencing. Genome Biol 2019, 20(1):1–15. https://doi.org/10.1186/s13059-019-1671-x
2 - Moffat I, Chepelev N, Labib S, Bourdon-Lacombe J, Kuo B, Buick JK, Lemieux F, Williams A, Halappanavar S, Malik A, et al.: Comparison of toxicogenomics and traditional approaches to inform mode of action and points of departure in human health risk assessment of benzo[a]pyrene in drinking water. Crit Rev Toxicol 2015, 45:1–43. https://doi.org/10.3109/10408444.2014.973934
3 - Makarov V, Gorlin A: Computational method for discovery of biomarker signatures from large, complex data sets. Comput Biol Chem 2018, 76:161–168. https://doi.org/10.1016/j.compbiolchem.2018.07.008