diff --git "a/PMC_clustering_700.jsonl" "b/PMC_clustering_700.jsonl"
new file mode 100644
--- /dev/null
+++ "b/PMC_clustering_700.jsonl"
@@ -0,0 +1,131 @@
+{"text": "We report eight phages infecting enterotoxigenic Escherichia coli (ETEC) responsible for intestinal infections in piglets. Phages vB_EcoM_F1, vB_EcoM_FB, vB_EcoS_FP, vB_EcoM_FT, vB_EcoM_SP1, vB_EcoP_SP5M, vB_EcoP_SP7, and vB_EcoS_SP8 were isolated between 2007 and 2018 in the Iberian Peninsula. These viruses span the three tailed phage families, Podoviridae, Siphoviridae, and Myoviridae.

Enterotoxigenic Escherichia coli (ETEC) infections cause diarrhea and death among weaning and postweaning piglets. Here, we present eight complete genomes of phages isolated in the Iberian Peninsula (Portugal and Spain). DNA was extracted from the phage isolates, and reads were assembled in Geneious Prime v2020.1 software. Genomic sequences were annotated and compared using tRNAscan-SE, ARAGORN, and HHpred, which revealed high homology with several viruses from the nonredundant database. The analyses of these phages' genomes, together with additional studies focused on their fitness, can provide new resources to combat ETEC infections. The GenBank accession numbers of the E. coli phage genome sequences are listed in PRJNA646048."}
+{"text": "Our software introduces new visualization technology that enables independent layers of interactivity using Plotly in R, which aids in the exploration of large biological datasets. The bigPint package presents modernized versions of scatterplot matrices, volcano plots, and litre plots through the implementation of layered interactivity. These graphics have detected normalization issues, differential expression designation problems, and common analysis errors in public RNA-sequencing datasets. Researchers can apply bigPint graphics to their data by following recommended pipelines written in reproducible code in the user manual. In this paper, we explain how we achieved the independent layers of interactivity that are behind bigPint graphics. Pseudocode and source code are provided. Computational scientists can leverage our open-source code to expand upon our layered interactive technology and/or apply it in new ways toward other computational biology tasks.

Interactive data visualization is imperative in the biological sciences. The development of independent layers of interactivity has been a long-standing pursuit in the visualization community. We developed bigPint, a data visualization package available on Bioconductor under the GPL-3 license. Biological disciplines face the challenge of increasingly large and complex data. One necessary approach toward eliciting information is data visualization. Newer visualization tools incorporate interactive capabilities that allow scientists to extract information more efficiently than static counterparts. In this paper, we introduce technology that allows multiple independent layers of interactive visualization written in open-source code. This technology can be repurposed across various biological problems.

Here, we apply this technology to RNA-sequencing data, a popular next-generation sequencing approach that provides snapshots of RNA quantity in biological samples at given moments in time. It can be used to investigate cellular differences between health and disease, cellular changes in response to external stimuli, and additional biological inquiries. RNA-sequencing data is large, noisy, and biased. It requires sophisticated normalization. The most popular open-source RNA-sequencing data analysis software focuses on models, with little emphasis on integrating effective visualization tools. This is despite sound evidence that RNA-sequencing data is most effectively explored using graphical and numerical approaches in a complementary fashion. The software we introduce can make it easier for researchers to use models and visuals in an integrated fashion during RNA-sequencing data analysis. This is a PLOS Computational Biology Software paper.

Interactive data visualization is increasingly imperative in the biological sciences. Interactive visualization tools for genomic data can have restricted access when they are only available on certain operating systems and/or require payment. We recently developed bigPint, an interactive RNA-sequencing data visualization software package available on Bioconductor. In the current paper, we will now explain the technical innovations and merits of the bigPint package, including new interactive visualization techniques that we believe can be helpful in the development and usage of future biological visualization software.

For users who would like to immediately try out the package hands-on and apply bigPint graphics to their data, we recommend consulting the example pipeline (https://lindsayrutter.github.io/bigPint/articles/pipeline). This pipeline uses reproducible code and sample data from the bigPint package, so you can smoothly follow along each line of example code. For additional details, we recommend users view the articles in the Get Started tab on the package website (https://lindsayrutter.github.io/bigPint).

Input data should be a count table in data frame format that contains the read counts for all genes of interest. The value in row i and column j should indicate how many reads have been assigned to gene i in sample j. This is the same input format required in popular RNA-seq count-based statistical packages, such as DESeq2, edgeR, limma, EBSeq, and BaySeq.
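As a hypothetical illustration of that layout (gene and sample names invented here, not taken from the package), such a count table can be assembled as a plain data frame in R:

```r
# Minimal sketch of the expected input: an ID column of gene names plus one
# column of raw read counts per sample (row i, column j = reads assigned to
# gene i in sample j). Sample names follow a group.replicate convention.
counts <- data.frame(
  ID  = c("Gene1", "Gene2", "Gene3"),
  N.1 = c(10L, 0L, 250L),  # group N, replicate 1
  N.2 = c(12L, 1L, 310L),  # group N, replicate 2
  P.1 = c(95L, 0L, 180L),  # group P, replicate 1
  P.2 = c(88L, 2L, 205L)   # group P, replicate 2
)
str(counts)
```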
Editor's specific comments:

Thank you for your patience while this manuscript was under review, as there was difficulty sourcing appropriately qualified reviewers. Please thoroughly address the comments of both reviewers in your revisions, in particular those focused on software architecture, reliability, and ease of use.

Reviewer's Responses to Questions

Comments to the Authors:
Please note here if the review is uploaded as an attachment.

Reviewer #1: Rutter and Cook discuss their new R package offering data visualization techniques geared towards RNA-seq data. The data visualization techniques make clever use of interactivity to highlight aspects of the data, which enables researchers to identify common analysis errors. While I like the concept of the presented visualization techniques, I am not convinced that this will be a widely applied tool. The tool felt clunky and still produced several errors. I will outline these as well as other issues/suggestions below:

1. The authors claim that their software can be used for the exploration of any large biological dataset. However, their choice of plots, in particular the volcano plot, suggests to me a focus on RNA-seq data. I would suggest the inclusion of another example for a non-RNA-seq dataset or making it clear that this package targets RNA-seq data exclusively.

2. It is great that all plots are easily downloadable, but for the purpose of replicability it would be great if logs of all user interactions could be provided. This can, for example, be handled with shinylogs (https://github.com/dreamRs/shinylogs).
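A minimal sketch of that shinylogs suggestion, assuming an ordinary Shiny app (the UI and output below are invented for illustration):

```r
library(shiny)
library(shinylogs)

ui <- fluidPage(
  sliderInput("n", "Number of points", min = 10, max = 100, value = 50),
  plotOutput("plot")
)

server <- function(input, output, session) {
  # Record user interactions (inputs, outputs, errors) as JSON files under
  # logs/, so an interactive session can be documented and replayed later.
  track_usage(storage_mode = store_json(path = "logs"))
  output$plot <- renderPlot(plot(rnorm(input$n)))
}

shinyApp(ui, server)
```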
```{r cut_and_paste_from_bit_ly_spmCode, include=FALSE, eval=FALSE}
# I have removed this from the review because it uses up too many words...
# trust me please, I have cut and pasted it verbatim.
```

When I run this, a window opens and then closes again. Interestingly, my data is a Value, not Data, in my Global Environment, and it says "NULL (empty)". Why? Well, let's run the data acquisition by itself. Annoyingly, this is not a reproducible problem. Sometimes it seems to work but I don't know why.

```{r test_data_download, include=TRUE}
data <- bigPint:::PKGENVIR$DATA
str(data)
```

Still NULL, sadly. All the other code examples use the same code to get data in, so I'm not going to try them here. I have tried them previously.

#### Where else can I go for help?

Let's check [Bioconductor](https://www.bioconductor.org/packages/release/bioc/html/bigPint.html). The page is a good starting point. It has a green build which is a good sign. Check out the [R-script](https://www.bioconductor.org/packages/release/bioc/vignettes/bigPint/inst/doc/bioconductor.R). Woops! It seems blank. A bit frustrating. The [HTML Vignette](https://www.bioconductor.org/packages/release/bioc/vignettes/bigPint/inst/doc/bioconductor.html) points to a website with a [rotten url](https://lindsayrutter.github.%20io/bigPint/)! Woops again!

Let's go to the Github Repository and see what we can find... The [link](https://lindsayrutter.github.io/bigPint/) is in the manuscript.

```{r plotSMApp}
data("soybean_cn_sub")
app <- plotSMApp(data = soybean_cn_sub)
if (interactive()) { shiny::runApp(app) }
```

Good news is that the data function works and gives us some data!

```{r str_data}
str(soybean_cn_sub)
```

Nice data.frame produced. Bad news is that my interactive plot still won't work! Well, actually it does work when I run this again. It is a bit slow and it gives lots of red text:

'scatter' objects don't have these attributes: 't2'

What happens if I try the

```{r try_PKGENVIR$DATA_again}
data <- bigPint:::PKGENVIR$DATA
```

It works! If I do it after I run your App. Wonder why? I'm not going to spend any time working through that at the moment. Now the script above works with lots of error messages.

#### Back to square one, can I make the static hex plots

```{r static_hex}
data(soybean_ir_sub)
soybean_ir_sub[,-1] <- log(soybean_ir_sub[,-1] + 1)
data(soybean_ir_sub_metrics)
ret <- plotLitre(data = soybean_ir_sub,
    dataMetrics = soybean_ir_sub_metrics)
length(ret)
names(ret)[1]
ret[[1]]
```

Answer is Yes! Good, and it looks interesting with points highlighted on it. I probably should have started here in the first place but hey...

#### I need to try to make the apps work...

I found some code here: https://rdrr.io/bioc/bigPint/man/plotVolcanoApp.html

```{r plotLitreApp}
soybean_ir_sub_log <- soybean_ir_sub
soybean_ir_sub_log[,-1] <- log(soybean_ir_sub_log[,-1] + 1)
app <- plotLitreApp(data = soybean_ir_sub_log,
    dataMetrics = soybean_ir_sub_metrics)
if (interactive()) { shiny::runApp(app) }
```

I can see the number of genes in each hexagram if I hover over it - nice. Plot gene works to give orange spots which can be hovered over to identify. However, I'm not sure how I am selecting those genes. I have worked out that it is going through the genes by rank. However, because that information is off the bottom of the screen, it took me a while to realise. I need to watch the video again!

Back to the [website](https://lindsayrutter.github.io/bigPint/articles/interactive.html)

```{r plotSMApp_again}
app <- plotSMApp(data = soybean_cn_sub)
if (interactive()) { shiny::runApp(app) }
```

This works! Excellent. Selecting hexagrams works. Downloading IDs works. Downloading plots doesn't :-( Need to open in Browser as advised! Could add that as an error message? OK so it does work, but it produces LOTS of warnings. I wonder why?

Warning: 'scatter' objects don't have these attributes: 't2'

Lots of repeats of this warning.

#### Try plotLitreApp

```{r plotLitreApp_again}
data("soybean_ir_sub")
data("soybean_ir_sub_metrics")
soybean_ir_sub_log <- soybean_ir_sub
soybean_ir_sub_log[,-1] <- log(soybean_ir_sub_log[,-1] + 1)
app <- plotLitreApp(data = soybean_ir_sub_log,
    dataMetrics = soybean_ir_sub_metrics)
if (interactive()) { shiny::runApp(app) }
```

#### Try plotPCPApp

```{r plotPCPApp}
soybean_ir_sub_st <- as.data.frame(t(apply(as.matrix(soybean_ir_sub[,-1]), 1, scale)))
soybean_ir_sub_st$ID <- as.character(soybean_ir_sub$ID)
soybean_ir_sub_st <- soybean_ir_sub_st[, c(length(soybean_ir_sub_st), 1:(length(soybean_ir_sub_st) - 1))]
colnames(soybean_ir_sub_st) <- colnames(soybean_ir_sub)
nID <- which(is.nan(soybean_ir_sub_st[, 2]))
soybean_ir_sub_st[nID, 2:length(soybean_ir_sub_st)] <- 0
plotGenes <- filter(soybean_ir_sub_metrics[["N_P"]], FDR < 0.01) %>% select(ID)
pcpDat <- filter(soybean_ir_sub_st, ID %in% plotGenes$ID)
app <- plotPCPApp(data = pcpDat)
if (interactive()) { shiny::runApp(app) }
```

Works! In Browser, I can save images as advised. Nice job.

#### Try Volcano app from website

Sadly no code on the [website](https://lindsayrutter.github.io/bigPint/articles/interactive.html#volcano-plot-app) when accessed on 12 Oct 2019. Used the example code from https://rdrr.io/bioc/bigPint/man/plotVolcanoApp.html instead, as above. Made it work again!

Session info locale: en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods
[7] base

other attached packages:
[1] shinycssloaders_0.2.0 Hmisc_4.2-0 Formula_1.2-3
[4] survival_2.44-1.1 lattice_0.20-38 RColorBrewer_1.1-2
[7] GGally_1.4.0 data.table_1.12.2 dplyr_0.8.3
[10] stringr_1.4.0 hexbin_1.27.3 tidyr_1.0.0
[13] htmlwidgets_1.5.1 plotly_4.9.0 ggplot2_3.2.1
[16] shinydashboard_0.7.1 shiny_1.4.0 bigPint_1.0.0

loaded via a namespace (and not attached):
[1] Rcpp_1.0.2 assertthat_0.2.1 zeallot_0.1.0
[4] digest_0.6.21 mime_0.7 R6_2.4.0
[7] plyr_1.8.4 backports_1.1.5 acepack_1.4.1
[10] httr_1.4.1 pillar_1.4.2 rlang_0.4.0
[13] lazyeval_0.2.2 rstudioapi_0.10 rpart_4.1-15
[16] Matrix_1.2-17 checkmate_1.9.4 labeling_0.3
[19] splines_3.6.1 foreign_0.8-72 munsell_0.5.0
[22] compiler_3.6.1 httpuv_1.5.2 xfun_0.10
[25] pkgconfig_2.0.3 base64enc_0.1-3 htmltools_0.4.0
[28] nnet_7.3-12 tidyselect_0.2.5 tibble_2.1.3
[31] gridExtra_2.3 htmlTable_1.13.2 reshape_0.8.8
[34] viridisLite_0.3.0 withr_2.1.2 crayon_1.3.4
[37] later_1.0.0 grid_3.6.1 jsonlite_1.6
[40] xtable_1.8-4 gtable_0.3.0 lifecycle_0.1.0
[43] magrittr_1.5 scales_1.0.0 stringi_1.4.3
[46] promises_1.1.0 latticeExtra_0.6-28 ellipsis_0.3.0
[49] vctrs_0.2.0 tools_3.6.1 glue_1.3.1
[52] purrr_0.3.2 crosstalk_1.0.0 fastmap_1.0.1
[55] yaml_2.2.0 colorspace_1.4-1 cluster_2.1.0
[58] knitr_1.25

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes
Reviewer #2: Yes

**********
PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files."}
+{"text": "Analysis of genetic sequence data from the SARS-CoV-2 pandemic can provide insights into epidemic origins, worldwide dispersal, and epidemiological history. With few exceptions, genomic epidemiological analysis has focused on geographically distributed data sets with few isolates in any given location. Here, we report an analysis of 20 whole SARS-CoV-2 genomes from a single relatively small and geographically constrained outbreak in Weifang, People's Republic of China. Using Bayesian model-based phylodynamic methods, we estimate a mean basic reproduction number (R0) of 3.4 in Weifang, and a mean effective reproduction number R(t) that falls below 1 on 4 February. We further estimate the number of infections through time and compare these estimates to confirmed diagnoses by the Weifang Centers for Disease Control. We find that these estimates are consistent with reported cases and there is unlikely to be a large undiagnosed burden of infection over the period we studied.

These data comprise 20 whole-genome sequences from confirmed COVID-19 cases in Weifang, Shandong Province, People's Republic of China. The data were collected over the course of several weeks up to 10 February 2020, and overlap with a period of intensifying public health and social distancing measures. These interventions included public health messaging, establishing phone hot-lines, encouraging home isolation for recent visitors from Wuhan (January 23-26), optimising triage of suspected cases in hospitals (January 24), travel restrictions (January 26), extending school closures, and establishing 'fever clinics' for consultation and diagnosis (January 27).

Model-based phylodynamic methods have been previously used to analyse sequence data from Wuhan and exported international cases.

As of 10 February 2020, 136 suspected cases and 214 close contacts were diagnosed by Weifang Center for Disease Control and Prevention; of these, 38 cases were confirmed positive with SARS-CoV-2. The median age of patients was 36 (range: 6-75). Two of twenty patients suffered severe or critical illness.

Viral RNA was extracted using the Maxwell 16 Viral Total Nucleic Acid Purification Kit (Promega AS1150) with the magnetic bead method, and the RNeasy Mini Kit (QIAGEN 74104) with the column method.
Quantitative reverse transcription polymerase chain reaction (RT-qPCR) was carried out using the 2019 novel coronavirus nucleic acid detection kit to confirm the presence of SARS-CoV-2 viral RNA, with cycle threshold (Ct) values ranging from 17 to 34, targeting the highly conserved region (ORF1ab/N gene) of the SARS-CoV-2 genome.

The concentration of RNA samples was measured by the Qubit RNA HS Assay Kit. The enzyme DNase was used to remove host DNA. The remaining RNA was used to construct the single-stranded circular DNA library with the MGIEasy RNA Library preparation reagent set. Purified RNA was then fragmented. Using these short fragments as templates, random hexamers were used to synthesise the first-strand cDNA and then the second strand. Using the short double-strand DNA, a DNA library was constructed through end repair, adaptor ligation, and PCR amplification. PCR products were transformed into a single-strand circular DNA library through DNA denaturation and circularisation. DNA nanoballs (DNBs) were generated from the single-strand circular DNA library by rolling circle replication. The DNBs were loaded into the flow cell, and pair-end 100 bp sequencing was performed on the DNBSEQ-T7 platform.

Total reads were first processed using Kraken v0.10.5 (default parameters) with a self-built database of Coronaviridae genomes to identify Coronaviridae-like reads. To remove low-quality reads, duplications, and adaptor contaminations, fastp v0.19.5 (parameters: -q 20 -u 20 -n 1 -l 50) and SOAPnuke v1.5.6 (parameters: -l 20 -q 0.2 -E 50 -n 0.02 -5 0 -Q 2 -G -d) were used. The Coronaviridae-like reads of samples with <100× average sequencing depth were directly assembled de novo with SPAdes v3.14.0 using default settings. The Coronaviridae-like reads of samples with >100× average sequencing depth across the SARS-CoV-2 genome were subsampled to achieve 100× sequencing depth before being assembled. Twenty genomes were assembled with lengths from 26,840 to 29,882 nucleotides. The 20 Weifang sequences have a mean 1.1 per cent N content and are deposited in GISAID (gisaid.org).

The phylodynamic model is designed to account for (1) nonlinear epidemic dynamics in Weifang with a realistic course of infection (incubation and infectious periods), (2) variance in transmission rates that can influence epidemic size estimates, and (3) migration of lineages in and out of Weifang.

The maximum number of daily confirmed COVID-19 cases occurred on February 5, but it is unknown when the maximum prevalence of infection occurred. To capture a nonlinear decrease in cases following the epidemic peak, and to account for a realistic distribution of generation times, we use an extension of the susceptible-exposed-infectious-recovered (SEIR) model for epidemic dynamics. To estimate total numbers infected, the phylodynamic model must account for epidemiological variables which are known to significantly influence genetic diversity. Foremost among these is variance in transmission rates; the model therefore includes a compartment J, which has a higher transmission rate (τ-fold higher) than the I compartment. The variance of the implied offspring distribution is calibrated to give a similar over-dispersion to that of the SARS epidemic. Upon leaving the incubation period, individuals progress to the J compartment with probability p_h, or otherwise to I. The model is implemented as a system of ordinary differential equations.
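The displayed equations did not survive extraction. Purely as an illustrative sketch, and not the authors' exact system, an SEIR-type model with the high-transmission class J and the rates defined here (γ0 for incubation, γ1 for recovery, p_h for the high-transmission route, and an exogenous reservoir Y growing at rate ρ, introduced below) might take the form:

```latex
\begin{aligned}
\dot{S} &= -\beta \,\frac{S}{N}\,(I + \tau J)\\
\dot{E} &= \beta \,\frac{S}{N}\,(I + \tau J) - \gamma_0 E\\
\dot{I} &= (1 - p_h)\,\gamma_0 E - \gamma_1 I\\
\dot{J} &= p_h\,\gamma_0 E - \gamma_1 J\\
\dot{R} &= \gamma_1 (I + J)\\
\dot{Y} &= \rho\, Y
\end{aligned}
```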
The outbreak in Weifang was seeded by multiple lineages imported at various times from the rest of China. We therefore account for location of sampling in our model. Migration is modelled as a bi-directional process with rates proportional to epidemic size in Weifang. The larger reservoir of COVID-19 cases outside of Weifang (Y(t)) serves as a source of new infections and is assumed to be growing exponentially (at rate ρ) over this time period.

Migration only depends on the size of variables in the Weifang compartments and thus does not influence epidemic dynamics; it will only influence the inferred probability that a lineage resides within Weifang. For a compartment of size X, η is the per-lineage rate of migration out of Weifang, and the total rate of migration in and out of Weifang is ηX.

The parameters η, β, and ρ are estimated. Additionally, we estimate initial sizes of Y, E, and S. Initial values of I, J, and R are fixed at 0. Other parameters are fixed based on prior information. We fix 1/γ0 = 4.1 days and 1/γ1 = 3.8 days; the initial growth rate in cases was approximately 22 per cent per day, consistent with rates estimated in other settings and during the early epidemic in Wuhan. Sampling took place when Weifang was implementing a variety of public health interventions and contact tracing to limit epidemic spread. Our central estimate of R(t) drops below 1 on the 4th of February. The effective reproduction number over time is shown in the corresponding figure.

Previous studies have shown the significance of realistic modelling for the fidelity of phylogenetic inference. In this analysis, there is a mean of three pairwise differences among sequences from Weifang; the corresponding number among the sequences outside of Weifang is eight. There is correspondingly low confidence in tree topology. The earliest Weifang sequence was sampled on 25 January from a patient who first showed symptoms on 16 January. These dates cover a similar range as the posterior TMRCA of all Weifang sequences.

Our analysis of 20 SARS-CoV-2 genomes has confirmed independent observations regarding the rate of spread and burden of infection in Weifang, China. Surveillance of COVID-19 is rendered difficult by high proportions of illness with mild severity and an unknown proportion of asymptomatic infection. The estimated R(t) over time, falling below 1 on 4 February, suggests a slower rate of spread outside of Wuhan and effective control strategies implemented in late January. It is consistent with a previous modelling study of Shandong province. The small number of sequences sampled is a limitation of this study. However, this represents a significant proportion of the total number of cases reported; there were thirty-eight confirmed cases at the date of the last genetic sample (10 February), rising no further than forty-four from 16 February onwards.

Further, it is possible that the outbreak observed in Weifang could be due not to community transmission, but rather to multiple importations. However, given that we sampled the reference set from a GISAID database downloaded in June, it is reasonable to assume close genetic matches would have been chosen. A maximum-likelihood tree of the entire alignment also supports community transmission. Community transmission is further supported by the fact that cases were identified via contact tracing. This forms another limitation, as it suggests non-random sampling of cases in Weifang. This could lead to an underestimate of the total number of cases in Weifang.
However, as a large proportion of reported cases were included in this analysis, the bias is unlikely to be too significant. Because β has a constant value in our model, R(t) can decrease only as a result of depleting susceptibles. The decrease in R(t) is therefore a constraint of the model and occurred even when sampling from the prior. Despite this, the genetic data were informative on the value of β (and therefore R0), which in turn affects the date at which R(t) falls below 1. Our analysis demonstrates a reliable mean estimate of R0, with narrower uncertainty, compared to sampling from the prior. Other methods allow for a time-varying transmission rate (including other PhyDyn model templates) or a piece-wise R(t) function.
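As an illustration of that constraint only (parameters invented, apart from the stated 1/γ0 = 4.1 days and 1/γ1 = 3.8 days), a toy SEIR trajectory in R shows R(t) = R0 · S(t)/S(0) declining purely through susceptible depletion when β is constant:

```r
library(deSolve)

# Toy SEIR system with a constant transmission rate beta; all numbers invented.
seir <- function(t, y, p) {
  with(as.list(c(y, p)), {
    dS <- -beta * S * I / N
    dE <-  beta * S * I / N - gamma0 * E
    dI <-  gamma0 * E - gamma1 * I
    dR <-  gamma1 * I
    list(c(dS, dE, dI, dR))
  })
}

p  <- c(beta = 0.9, gamma0 = 1 / 4.1, gamma1 = 1 / 3.8, N = 1e6)
y0 <- c(S = 1e6 - 1, E = 0, I = 1, R = 0)
out <- as.data.frame(ode(y = y0, times = 0:150, func = seir, parms = p))

R0 <- p[["beta"]] / p[["gamma1"]]  # basic reproduction number (about 3.4)
Rt <- R0 * out$S / p[["N"]]        # falls only as S(t) is depleted
out$time[min(which(Rt < 1))]       # first day on which R(t) < 1
```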
Accession numbers for sequences from Weifang: EPI_ISL_413691 EPI_ISL_413693 EPI_ISL_413694, EPI_ISL_413695 EPI_ISL_413696 EPI_ISL_413697, EPI_ISL_413711 EPI_ISL_413729 EPI_ISL_413746, EPI_ISL_413747 EPI_ISL_413748 EPI_ISL_413749, EPI_ISL_413750 EPI_ISL_413751 EPI_ISL_413752, EPI_ISL_413753 EPI_ISL_413761 EPI_ISL_413791, EPI_ISL_413809 EPI_ISL_413692. Accession numbers for sequences from outside of Weifang: EPI_ISL_414380 EPI_ISL_437621 EPI_ISL_429092, EPI_ISL_418327 EPI_ISL_416335 EPI_ISL_413854, EPI_ISL_402121 EPI_ISL_408480 EPI_ISL_418503, EPI_ISL_450196 EPI_ISL_417030 EPI_ISL_424356, EPI_ISL_451351 EPI_ISL_408010 EPI_ISL_430742, EPI_ISL_416366 EPI_ISL_451343 EPI_ISL_416381, EPI_ISL_407988 EPI_ISL_413882 EPI_ISL_413881, EPI_ISL_413879 EPI_ISL_411954 EPI_ISL_417184, EPI_ISL_418992 EPI_ISL_454935 EPI_ISL_414569, EPI_ISL_416570 EPI_ISL_416600 EPI_ISL_413608, EPI_ISL_451347 EPI_ISL_419242 EPI_ISL_414485, EPI_ISL_414005 EPI_ISL_430847 EPI_ISL_415580, EPI_ISL_413595 EPI_ISL_455376 EPI_ISL_417101, EPI_ISL_417168 EPI_ISL_455410 EPI_ISL_424081, EPI_ISL_440461 EPI_ISL_440433 EPI_ISL_455696, EPI_ISL_444577 EPI_ISL_456208 EPI_ISL_434463, EPI_ISL_437264 EPI_ISL_452673 EPI_ISL_437515, EPI_ISL_437185 EPI_ISL_427257 EPI_ISL_432722, EPI_ISL_437704 EPI_ISL_461275 EPI_ISL_403932. Supplementary data (veaa102_Supplementary_Data) are available at Virus Evolution online."}
+{"text": "The data set compiled in this file refers to the Multizone EnergyPlus model, used in the investigations of the research article entitled \"Natural ventilation potential from weather analyses and building simulation\". The technical information regarding the model has been grouped into tables, which include: the general simulation settings, the properties of the building materials, the Airflow Network opening settings used in the annual investigation, in addition to the controls established in the Energy Management System (EMS) for hybrid ventilation system operation. The user behaviour, regarding the living and bedroom occupancy schedule, is also presented in a graph. This data set is made available to the public to clarify details of the EnergyPlus model and how the hybrid operation was defined. In this way, other researchers can perform an extended analysis of the information. Different configurations and simulation techniques could be employed based on these available data, so different studies might be compared.

• The data presented in this article can assist designers and researchers who deal with the modelling of naturally ventilated buildings, especially with the Airflow Network and multizone approach.
• The use of the Energy Management System (EMS) to model hybrid ventilation operation could be adopted as a reference for further research on naturally ventilated buildings.

1. The data in this article present the input data regarding the EnergyPlus model used in the investigations addressed in the research paper. General simulation settings are summarised in the corresponding table. The model set-up is based on consolidated practices used in studies involving INES' experimental houses, and therefore does not use the EnergyPlus database. Since the houses are originally unoccupied, a classic family occupancy schedule was established, which represents an extreme (worst-case) scenario. Besides, the EnergyPlus input data files (.idf) are available for download in the Mendeley repository.

2. The controls developed in the Energy Management System (EMS) objects for the consolidation of the hybrid behaviour in the annual analyses are presented below. The operation mode was adopted in all occupied zones, exemplified here by the living room zone.

The set-up enables the following changes: triggering the thermal load calculation at a temperature different from the thermostat; deactivation of the thermal load calculation only after occupancy in a room is null; and hybrid control, where the local thermal prognosis is not allowed to occur together with the window opening for natural ventilation in the same time step.

All objects in class: energymanagementsystem:sensor

EnergyManagementSystem:Sensor,
    OT_Living,                            !- Name
    Living,                               !- Output:Variable or Output:Meter Index Key Name
    Zone Operative Temperature;           !- Output:Variable or Output:Meter Name

EnergyManagementSystem:Sensor,
    Occ_Living,                           !- Name
    Living_Occ,                           !- Output:Variable or Output:Meter Index Key Name
    People Occupant Count;                !- Output:Variable or Output:Meter Name

EnergyManagementSystem:Sensor,
    Ext_Temp,                             !- Name
    Environment,                          !- Output:Variable or Output:Meter Index Key Name
    Site Outdoor Air Drybulb Temperature; !- Output:Variable or Output:Meter Name

EnergyManagementSystem:Sensor,
    T_Living,                             !- Name
    Living,                               !- Output:Variable or Output:Meter Index Key Name
    Zone Mean Air Temperature;            !- Output:Variable or Output:Meter Name

EnergyManagementSystem:Sensor,
    Heat_Living,                          !- Name
    Heat_Living,                          !- Output:Variable or Output:Meter Index Key Name
    Schedule Value;                       !- Output:Variable or Output:Meter Name

All objects in class: energymanagementsystem:actuator

EnergyManagementSystem:Actuator,
    HeaterControl_Living,                 !- Name
    Heat_Living,                          !- Actuated Component Unique Name
    Schedule:Constant,                    !- Actuated Component Type
    Schedule Value;                       !- Actuated Component Control Type

EnergyManagementSystem:Actuator,
    NVControl_Living,                     !- Name
    NV_Living,                            !- Actuated Component Unique Name
    Schedule:Constant,                    !- Actuated Component Type
    Schedule Value;                       !- Actuated Component Control Type

All objects in class: energymanagementsystem:programcallingmanager

EnergyManagementSystem:ProgramCallingManager,
    HybridControl,                        !- Name
    BeginTimestepBeforePredictor,         !- EnergyPlus Model Calling Point
    Hyb_Living;                           !- Program Name 1

All objects in class: energymanagementsystem:program

EnergyManagementSystem:Program,
    Hyb_Living,                                        !- Name
    SET Temp_Heat = T_Living <= 19,                    !- Program Line 1
    IF ((Occ_Living > 0) && (Temp_Heat == 1)),         !- Program Line 2
    SET HeaterControl_Living = 1,                      !- A4
    SET NVControl_Living = 0,                          !- A5
    ELSEIF ((Occ_Living > 0) && (Heat_Living > 0)),    !- A6
    SET HeaterControl_Living = 1,                      !- A7
    SET NVControl_Living = 0,                          !- A8
    ELSEIF (Occ_Living > 0),                           !- A9
    IF ((Ext_Temp < T_Living) && (Ext_Temp > 20)),     !- A10
    SET HeaterControl_Living = 0,                      !- A11
    SET NVControl_Living = 1,                          !- A12
    ELSEIF ((Ext_Temp > T_Living) && (Ext_Temp > 20)), !- A13
    SET HeaterControl_Living = 0,                      !- A14
    SET NVControl_Living = 0,                          !- A15
    ELSEIF (Ext_Temp < 20),                            !- A16
    SET HeaterControl_Living = 0,                      !- A17
    SET NVControl_Living = 0,                          !- A18
    ENDIF,                                             !- A19
    ELSEIF (Occ_Living == 0),                          !- A20
    SET HeaterControl_Living = 0,                      !- A21
    SET NVControl_Living = 0,                          !- A22
    ENDIF;                                             !- A23

Nayara R. M. Sakiyama: Conceptualization, Methodology, Software, Data curation, Formal analysis, Investigation, Writing - original draft preparation. Leonardo Mazzaferro: Software, Visualization, Validation, Writing - review and editing. Joyce C. Carlo: Supervision. Timea Bejat: Resources. Harald Garrecht: Project administration.

The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article."}
+{"text": "Aim: Whole genome and peptide mutation analysis can specify effective vaccines and therapeutics against severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2). Materials & methods: Whole genome similarity for Bangladeshi SARS-CoV-2 was determined using ClustalW and BLASTn. Phylogenetic analysis was conducted using the neighbor-joining method. Results: 100% of isolates in Bangladesh were in the G clade. We found 99.98-100% sequence similarity among Bangladeshi isolates and isolates of England, Greece, the USA, Saudi Arabia and India. Deletion of bases at the 5′ untranslated region and 3′ untranslated region was detected. Substitution 261 (E→D) at NSP13 and 1109 (F→L) at the spike (S) protein were detected. Substitution 377 (D→G) at the nucleocapsid, with the common substitution 614 (D→G) at S, was also detected. Conclusion: This study will provide baseline data for development of an effective vaccine or therapeutics against SARS-CoV-2.

A novel species of coronavirus (family Coronaviridae), severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), previously named 2019-novel coronavirus (2019-nCoV), has emerged as a pandemic virus. Betacoronaviruses, including SARS-CoV (2003), HCoV NL63 (2004), HKU1 (2005) and MERS-CoV (2012), have been reported to cause human respiratory system infection.

Data were collected from different databases. COVID-19 cases and fatalities data were collected from Worldometers (www.worldometers.info/coronavirus), the Johns Hopkins University COVID-19 database (https://coronavirus.jhu.edu/), the Institute of Epidemiology, Disease Control and Research website (www.iedcr.gov.bd/website/) and the Directorate General of Health Services in Bangladesh website (www.dghs.gov.bd/index.php/bd/). Various environmental data were collected from the Bangladesh Meteorological Department (http://live4.bmd.gov.bd/satelite/v/sat_infrared/) and AccuWeather (www.accuweather.com). Each month was divided into four equal weeks (W1-W4), except that the last week (W5) contained 3 days in January, 1 day in February, 3 days in March, 2 days in April and 3 days in May, respectively. Appropriate institutional review board approval was obtained from the Biosafety, Biosecurity and Ethical Committee of Jahangirnagar University for this study.
The approval number was BBEC, JU/M 2020/COVID-19/(10)1.

Whole genomes of SARS-CoV-2 were collected from GISAID. Sequence homology was determined using the BLASTn program. Multiple sequence alignment was conducted in BioEdit 7.2.6 using the ClustalW Multiple Alignment algorithm, and mutations were identified against reference sequences. Phylogenetic and molecular evolutionary relationship analyses of Bangladeshi SARS-CoV-2 were conducted using the whole genome sequences of the references with the MEGA-X software. Phylogenetic trees were constructed using the neighbor-joining method.
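The study used MEGA-X; purely as an illustration of the same neighbor-joining idea (file name and substitution model invented), an equivalent tree could be scripted in R with the ape package:

```r
library(ape)

# Aligned whole genomes in FASTA format (hypothetical file name).
aln <- read.dna("sarscov2_aligned.fasta", format = "fasta")

# Pairwise distances under the Kimura 2-parameter model, ignoring
# alignment columns with gaps in either sequence of a pair.
d <- dist.dna(aln, model = "K80", pairwise.deletion = TRUE)

# Neighbor-joining tree with bootstrap support from resampled alignments.
tree <- nj(d)
bs <- boot.phylo(tree, aln, function(x)
  nj(dist.dna(x, model = "K80", pairwise.deletion = TRUE)), B = 100)
tree$node.label <- bs

plot(tree, cex = 0.5)
```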
Reference sequences of SARS-CoV-2 from 58 countries were used for the phylogenetic analysis. GISAID references are: Mexico/CDMX-InDRE|EPI_ISL_412972, Luxembourg/LNS2299410|EPI_ISL_421745, United Arab Emirates/L0881|EPI_ISL_435125, Latvia|EPI_ISL_437090, Saudi Arabia/Jeddah66|EPI_ISL_437755, Saudi Arabia/Jeddah60|EPI_ISL_443182, Singapore/126|EPI_ISL_443218, USA/NY-NYUMC623|EPI_ISL_444740, Chile/Santiago_40|EPI_ISL_445319, Wales/PHWC-2B55B|EPI_ISL_445726, Germany/FFM3|EPI_ISL_447609, Greece/259_31928|EPI_ISL_447651, France/10008DM|EPI_ISL_447697, France/10003SN|EPI_ISL_447719, France/10006HC|EPI_ISL_447727, Greece/54_37163|EPI_ISL_447835, USA/DC-CDC-0019|EPI_ISL_447840, Denmark/ALAB-SSI-899|EPI_ISL_444964, Guangdong/SYSU-IHV|EPI_ISL_444969, Luxembourg/LNS5711030|EPI_ISL_445072, Serbia/Novi Pazar-363|EPI_ISL_445087, Japan/Hu_Kng_19-865|EPI_ISL_445183, Sweden/20-07741|EPI_ISL_445241, Romania/279067|EPI_ISL_445243, Chile/Santiago_80|EPI_ISL_445377, Wales/PHWC-2F065|EPI_ISL_446193, Thailand/Bangkok-0079|EPI_ISL_447020, Romania/279068|EPI_ISL_447054, Georgia/Tb-6598|EPI_ISL_447055, DRC/3653|EPI_ISL_447231, Israel/130710062|EPI_ISL_447469, Spain/Valencia597|EPI_ISL_447519, Spain/Valencia598|EPI_ISL_447520, India/CCMB_J278|EPI_ISL_447565, Australia/QLDID941|EPI_ISL_447594, Germany/FFM7|EPI_ISL_447613, Taiwan/NTU27|EPI_ISL_447621, Greece/238_31927|EPI_ISL_447645, France/10007LJ|EPI_ISL_447725, Greece/55_36015|EPI_ISL_447834, Greece/145_34726|EPI_ISL_447836, Norway/2200|EPI_ISL_447837, India/CCMB_K499|EPI_ISL_447865, Iceland/604|EPI_ISL_424624, Mexico/CDMX-INER_04|EPI_ISL_424626, Czech Republic/IAB20-006-16|EPI_ISL_426581, Uruguay/UY-9|EPI_ISL_426584, Spain/Madrid_H12_2902|EPI_ISL_428701, Russia/Moscow-77620|EPI_ISL_428889, Jordan/SR-036|EPI_ISL_429996, Russia/StPetersburg-RII4917S|EPI_ISL_430070, Beijing/BJ589|EPI_ISL_430738, Argentina/PAIS_A005|EPI_ISL_430797, Greece/150|EPI_ISL_434485, Luxembourg/LNS8258882|EPI_ISL_434491, Myanmar/NIH-4385|EPI_ISL_434709, Vietnam/OUCRU022|EPI_ISL_435303, Spain/Valencia247|EPI_ISL_436342, Russia/Moscow-GCBL3|EPI_ISL_436717, Italy/TE5543|EPI_ISL_436720, Latvia/017|EPI_ISL_437096, Indonesia/JKT-EIJK03|EPI_ISL_437191, Austria/Graz-MUG4|EPI_ISL_437200, Germany/BAV-MVP0062|EPI_ISL_437261, Turkey/HSGM-10232|EPI_ISL_437334, Poland/Wro-02|EPI_ISL_437625, Denmark/ALAB-SSI-137|EPI_ISL_437649, Saudi Arabia/Madinah258|EPI_ISL_437742, USA/WA-UW-6546|EPI_ISL_437860, Greece/246_32206|EPI_ISL_437905, Austria/CeMM0023|EPI_ISL_437913, Japan/Donner9|EPI_ISL_438954, Scotland/EDB3941|EPI_ISL_439666, Northern Ireland/NIRE-FAAED|EPI_ISL_441737, Saudi Arabia/Makkah19|EPI_ISL_443166, Singapore/130|EPI_ISL_443222, France/B5688|EPI_ISL_443293, England/LOND-D5453|EPI_ISL_444222, England/LOND-D606D|EPI_ISL_444272, Australia/QLDID937|EPI_ISL_444611, USA/NY-NYUMC610|EPI_ISL_444727, Wuhan/IVDC-HB-01|EPI_ISL_402119, Wuhan/WH05|EPI_ISL_408978, Cambodia/0012|EPI_ISL_411902, South Korea/SNU01|EPI_ISL_411929, Switzerland/1000477806|EPI_ISL_413024, Italy/UniSR1|EPI_ISL_413489, New Zealand/01|EPI_ISL_413490, South Korea/KUMC03|EPI_ISL_413513, India/1-31|EPI_ISL_413523, USA/CruiseA-24|EPI_ISL_414483, Hong Kong/VM20002582|EPI_ISL_414569, USA/WA-UW75|EPI_ISL_415603, Peru/010|EPI_ISL_415787, Brazil/SPBR-09|EPI_ISL_416031, Brazil/SPBR-10|EPI_ISL_416032, Iceland/242|EPI_ISL_417570, Colombia/Antioquia79256|EPI_ISL_417924, Portugal/PT0021|EPI_ISL_418006, Canada/ON_PHL3802|EPI_ISL_418337, Canada/ON_PHL5710|EPI_ISL_418338, Finland/14M26|EPI_ISL_418406, Germany/NRW-24|EPI_ISL_419541, Georgia/Tb-1352|EPI_ISL_420140, Belgium/ULG-10018|EPI_ISL_421196, Taiwan/NTU11|EPI_ISL_422413 and Netherlands/NA_163|EPI_ISL_422698.

Most COVID-19 cases had been reported from Dhaka, the capital of Bangladesh, with an average increase rate of 5973 cases/week and 81 fatalities/week in the country. COVID-19 is increasing relatively slowly in Dhaka, which has a population density of 121,720/mi2. During 1 February 2020 to 9 June 2020, the minimum temperature average was 20°C, the maximum temperature average was 32.5°C and the mean temperature average was 26.5°C in Dhaka.

The first seven sequenced SARS-CoV-2 in Bangladesh were from the G clade. Compared with 40,000 whole genomes, Bangladeshi SARS-CoV-2 were found to have 100-99.98% sequence similarity with reference sequences. The first sequenced whole genome in Bangladesh, Bangladesh/CHRF, had 99.99% sequence similarity with whole genomes of SARS-CoV-2 from Germany/FFM3, Sweden/20-07237, USA/NY-NYUMC623, Saudi Arabia/KAUST-Jeddah60, Latvia/011, United Arab Emirates/L0881 and Mexico/CDMX-InDRE_01.

Whole genomes of the first seven SARS-CoV-2 in Bangladesh were analyzed. The first sequence from Bangladesh, CHRF|EPI_ISL_437912, was 29,903 bases in length, sharing the highest similarity with the Wuhan reference sequence (NC_045512/Wuhan-Hu-1) by length. The other six whole genomes, DNAS_CPH_467|EPI_ISL_445213, DNAS_CPH_471|EPI_ISL_445214, DNAS_CPH_427|EPI_ISL_445215, DNAS_CPH_466|EPI_ISL_445216, DNAS_CPH_436|EPI_ISL_445217 and Akbiomed|EPI_ISL_445244, had sequence lengths of 29,642, 29,833, 29,829, 29,828, 29,706 and 29,823 bases, respectively. CHRF|EPI_ISL_437912 did not have any deletion mutation, while the other six isolates had significant deletions at both ends. Deletion of the first 25 bases at the 5′ untranslated region (UTR) and of 40-60 bases at the 3′ UTR stem loop region was commonly detected in these six isolates. Besides, numbers of deletion mutations were detected in DNAS_CPH_467|EPI_ISL_445213 and DNAS_CPH_436|EPI_ISL_445217 after 27,000 bases. An insertion mutation was detected in only one whole genome, Akbiomed|EPI_ISL_445244, between positions 202 and 203. Substitution point mutations were the most common in Bangladeshi SARS-CoV-2. At the 5′ UTR, substitution 241 (C→T) was found in all the isolates from Bangladesh. The most frequently detected substitution point mutations in the ORF1ab region were 1163 (A→T), 3037 (C→T) and 14408 (C→T) in Bangladeshi SARS-CoV-2. In the spike protein-coding region, substitution mutation 23403 (A→G) was frequent, while 24887 (T→C) was detected only in DNAS_CPH_427|EPI_ISL_445215. Large numbers of substitutions and deletions were found at ORF7a, ORF7b and ORF8 (27394-28259) in DNAS_CPH_436|EPI_ISL_445217. Substitution point mutations 28881 (G→A), 28882 (G→A) and 28883 (G→C) in the N protein region were common in most of the Bangladeshi isolates. Large numbers of substitution and deletion mutations were also found at the N protein (28274-29533), ORF10 (29558-29674) and 3′ UTR (29675-29903) regions in four genomes.
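As an illustrative sketch only (file name invented), substitution positions of this kind can be listed from a pairwise alignment in R with Biostrings:

```r
library(Biostrings)

# A two-sequence alignment: the Wuhan reference first, then one isolate,
# both already aligned to equal length (e.g. exported from BioEdit).
aln <- readDNAStringSet("ref_vs_isolate_aligned.fasta")
ref <- strsplit(as.character(aln[[1]]), "")[[1]]
iso <- strsplit(as.character(aln[[2]]), "")[[1]]

# Substitutions only: positions where both sequences have a base but differ.
pos <- which(ref != iso & ref != "-" & iso != "-")
paste0(pos, " (", ref[pos], "->", iso[pos], ")")  # e.g. "241 (C->T)"
```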
In the amino acid peptide sequences, significant and rare point mutations were found in Bangladeshi novel coronavirus genomes. At the NSP2 protein, 120 (I→F) was detected for the first time in a SARS-CoV-2 genome, while at NSP3, 1184 (Q→H) was very rare and detected in Bangladeshi isolates. At NSP6, 3 (V→M) was detected globally for the first time in DNAS_CPH_467/2020|EPI_ISL_445213. Furthermore, at NSP12, 323 (P→L) was found in all of the Bangladeshi isolates. At NSP13, 261 (E→D) was detected in CHRF/EPI_ISL_437912. The spike protein's common mutation 614 (D→G) was present in every isolate from Bangladesh. Of note, 1109 (F→L) at the S protein was detected worldwide for the first time in DNAS_CPH_471/2020|EPI_ISL_445214. At NS3, 172 (G→C) was detected in some of the Bangladeshi isolates. Furthermore, at the N protein, 203 (R→K), 204 (G→R) and the rare 377 (D→G) were detected in some of the isolates.

The novel coronavirus has triggered the ongoing COVID-19 pandemic, infecting over 7.2 million people worldwide. The first seven isolates in Bangladesh were in the G clade. Among the first seven isolates, 57% (four of seven) were detected in male and 43% (three of seven) in female patients. Furthermore, the highest percentage of SARS-CoV-2 in Bangladesh was detected in patients of 21-30 years of age, followed by 28.6% in 31-40 years, 14.3% in 11-20 years and 14.3% in 41-50 years, respectively. The distribution of gender was similar to previous studies in Europe, China and Asia, but the age distribution of COVID-19 patients in Bangladesh was unique.

The phylogenetic analysis revealed that the first sequenced SARS-CoV-2 in Bangladesh, CHRF|EPI_ISL_437912, was closely related to beta coronaviruses from the UAE, Latvia, Saudi Arabia, Mexico and the USA and clustered with them. In the BLAST analysis of CHRF|EPI_ISL_437912, this study detected 99.99% sequence similarity with Germany/EPI_ISL_447609, Sweden/EPI_ISL_445231, USA/NY/EPI_ISL_444740, Saudi Arabia/Jeddah60/EPI_ISL_443182, Latvia/EPI_ISL_437090, United Arab Emirates/EPI_ISL_435125 and Mexico/EPI_ISL_412972. This indicates the probable evolutionary linkage of this isolate with European, Middle East and American beta coronaviruses. Another four of these isolates, DNAS_CPH_467|EPI_ISL_445213, DNAS_CPH_471|EPI_ISL_445214, DNAS_CPH_427|EPI_ISL_445215 and DNAS_CPH_436|EPI_ISL_445217, clustered with each other and were closely related to coronaviruses of Greece and Spain in the phylogenetic tree. Of note, DNAS_CPH_466|EPI_ISL_445216 clustered with coronaviruses of England and Myanmar, and this cluster was closely related to isolates of France and Germany as well. Bangladeshi isolates DNAS_CPH_467|EPI_ISL_445213, DNAS_CPH_471|EPI_ISL_445214 and DNAS_CPH_427|EPI_ISL_445215 shared 99.98% sequence identity with USA/EPI_ISL_447840, Greece/EPI_ISL_447835, Wales/EPI_ISL_445726, Chile/EPI_ISL_445319 and Germany/EPI_ISL_447609 in BLAST analysis. Furthermore, isolate DNAS_CPH_466|EPI_ISL_445216 was found to have 100% sequence similarity with Luxembourg/EPI_ISL_421745 and France/EPI_ISL_447727, while isolate DNAS_CPH_436|EPI_ISL_445217 had 99.99% similarity with USA/EPI_ISL_447840 and Greece/EPI_ISL_447835. Isolate Akbiomed|EPI_ISL_445244 clustered with beta coronaviruses of Georgia, Jordan and England in the phylogenetic tree, while it shared 99.99% sequence similarity with USA/EPI_ISL_447840, India/EPI_ISL_447554 and India/EPI_ISL_447047 in BLAST. This study detected that Bangladeshi SARS-CoV-2 had significant evolutionary relationships with European, American and Asian SARS-CoV-2.
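Percent identities like those above come from pairwise alignment; a minimal sketch in R with Biostrings (file names invented) might be:

```r
library(Biostrings)

# Two assembled genomes, one per FASTA file (hypothetical names).
g1 <- readDNAStringSet("CHRF_EPI_ISL_437912.fasta")[[1]]
g2 <- readDNAStringSet("Germany_EPI_ISL_447609.fasta")[[1]]

# Global pairwise alignment, then percent sequence identity.
aln <- pairwiseAlignment(g1, g2, type = "global")
pid(aln)  # near 99.99 for almost identical genomes
```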
Whole genome analysis of the novel coronavirus is necessary to understand its infectivity and the fatality associated with specific variants, and to predict any alteration of the efficacy of a possible drug or vaccine due to modification of the target proteins of the virus. The 5′ and 3′ UTRs of the genome contain cis-acting elements that interact during virus replication and RNA synthesis. Besides the 5′ UTR (1-25) deletion, the Akbiomed|EPI_ISL_445244 isolate had a number of synonymous point mutations at the SL1, SL5A and SL5B regions, with one insertion of G between bases 202 and 203 in the SL5A region. Furthermore, substitution point mutation 241 (C→T) was common in all Bangladeshi isolates. Deletion and substitution mutations at SL5A and SL5B of the 5′ UTR are involved in altered efficiency of coronavirus replication and infection pattern by changing the interaction of the genome with the viral nucleocapsid protein (N) and the nsp1 protein.

In the protein-coding regions, significant substitution point mutations were detected in Bangladeshi isolates. In the ORF1ab (266-21555) region, 1163 (A→T), 3037 (C→T) and 14408 (C→T) were frequent in Bangladeshi isolates. Substitution of isoleucine with phenylalanine at NSP2 120 (I→F) was detected in this study.

A substitution mutation at 25609 (G→T) of ORF3a was detected in three of the seven isolates. Mutation at ORF3a is most frequent in Europe, followed by Asia, Oceania and North America, respectively.

Specific mutations, both deletions and substitutions, in Bangladeshi coronavirus isolates were detected at multiple sites in the genome and in the peptide chains. Of note, several new mutations at the ORF1ab region, specifically in the RdRp and its accessory proteins, will allow the virus to multiply without proofreading, which will increase the possibility of accumulating more new mutations in the genome. Furthermore, along with a previous mutation associated with high case fatality, a new mutation at the spike protein was also detected, which increases the virus' chance of escaping antibodies or drugs targeting the spike protein. To the best of our knowledge, this is one of the first studies to report phylogenetic and genomic analysis of the first seven sequenced novel coronaviruses in Bangladesh. This study reported significant new mutations at important sites in the novel coronavirus genome and antigenic peptide regions. These mutations will affect virus replication strategy and antigenic properties, which will ultimately change the virus' capability to infect and help the virus escape from antibodies and drugs. This study will be a baseline database of coronavirus genome analysis that will help to predict effective vaccine and drug targets of the coronavirus.

In a pandemic like COVID-19, whole genome analysis of the pathogen is important in order to understand the transmission and severity of the disease accurately. With limited resources, the number of whole genome analyses in Bangladesh is lower than in other developing countries.
This study investigated the total mutations, phylogeny and evolution of the first seven whole genomes of SARS-CoV-2 in Bangladesh. They were closely related to each other and to isolates from Germany, the USA, Saudi Arabia, France, Greece and India. Acquisition of unique mutations along with common mutations throughout the genome suggested rapid change of the circulating strains in Bangladesh. This study will provide a baseline for whole genome research of novel coronaviruses in Bangladesh.

Approximately 57% (four of seven) of severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) isolates were detected in male and 43% (three of seven) in female patients.

All (100%) Bangladeshi SARS-CoV-2 shared 99.98-100% sequence similarity with isolates of Germany, Sweden, the USA, Saudi Arabia, England, Myanmar and India.

Deletion of bases at the 5′ untranslated region and 3′ untranslated region in Bangladeshi isolates was specified.

The new substitution 1109 (F→L), together with the deadly substitution 614 (D→G), at the spike protein was detected.

Mutations at RNA-dependent RNA polymerase regions may lead to accumulation of random mutations in the novel coronavirus genome in the future."}
+{"text": "Dear Editor,

Cohesin is a multiprotein complex that not only is essential for cell division but also has key roles in genome organization that underpin its gene regulatory function. Recurrent mutations of genes encoding cohesin subunits occur in myeloid malignancies at 10%-12%.

RUNX1 and ERG are precociously transcribed in response to phorbol 12-myristate 13-acetate (PMA)-induced megakaryocytic differentiation. Cohesin depletion was previously shown to alter chromatin accessibility and transcription of the RUNX1 and ERG genes. Chromatin accessibility in two STAG2-null clones (A-STAG2-null and B-STAG2-null) was compared with profiles defined for K562 cells, CD34+ primary cord blood cells, and CD14+ monocytes. KIT, which is only lowly expressed in K562 cells, was elevated in STAG2-null clones, while its mRNA was reduced dramatically following 6 h of treatment, implying that enhancer suppression can blunt RUNX1/ERG transcription and reduce leukaemic stem cell-associated KIT expression in STAG2 mutant cells.

Overall, our results suggest that cohesin-STAG2 depletion de-constrains the chromatin surrounding RUNX1 and ERG, which causes aberrant enhancer-amplified transcription in response to differentiation signals. We show that enhancer suppression using the BET inhibitor JQ1 prevents aberrant RUNX1 and ERG signal-induced transcription in STAG2 mutant cells and reduces leukaemic stem cell characteristics of STAG2 mutants.

Supplementary materials: Antony_et_al_supplementary_material_mjz114; Supplementary_Data1_Differential_genes_STAG2null_versus_WT_mjz114; Supplementary_Data2_Array_CGH_mjz114; Supplementary_Data3_ATAC-seq_differential_STAG2nullA_versus_WT_mjz114; Supplementary_Data4_Superenhancers_mjz114."}
+{"text": "The malignancy potential of laryngeal lesions is one of the major concerns of surgeons when choosing treatment options, forming surgical margins, and deciding follow-up periods. Finding a biomarker to address these concerns is an ongoing challenge, and recently microRNAs (miRNAs) have been proposed as possible candidates since they can regulate gene expression in the human genome.
The objective of our study was to investigate their capability as a transformation biomarker for malignant laryngeal lesions.

We investigated mature miRNA expressions in paraffin-embedded surgical specimens of human laryngeal tissues grouped as benign, premalignant or malignant (n = 10 in each). miRNA profiling was carried out by quantitative real-time polymerase chain reaction (RT-qPCR), and the data were analyzed according to fold regulation.

The expressions of Hs_miR-183_5p, Hs_miR-155_5p and Hs_miR-106b_3p were significantly upregulated in premalignant lesions compared to the benign lesions, including 2.72 (p = 0.028) and 3.01 (p = 0.022) fold changes. Moreover, their expressions were approximately 2.76 fold higher in the malignant group than in the premalignant group compared to the benign group. Besides them, significant 7.57 (p = 0.036), 4.45 (p = 0.045) and 5.98 (p = 0.023) fold upregulations of Hs_miR-21_5p, Hs_miR-218_3p, and Hs_miR-210_3p were noticed in the malignant group, but not in the premalignant group, when compared to the benign group, respectively.

Our results demonstrated that 9 miRNAs were upregulated as the lesions became more malignant. Among them, Hs_miR-183_5p, Hs_miR-155_5p, and Hs_miR-106b_3p might be followed as transformation markers, whereas Hs_miR-21_5p, Hs_miR-218_3p, and Hs_miR-210_3p might be biomarkers prone to malignancy.

miRNAs might have important value in helping clinicians with their concerns about the malignancy potential of laryngeal lesions.

Laryngeal carcinoma is one of the most common malignancies in the head and neck region, with a good prognosis in the early stages. MicroRNAs (miRNAs) are endogenous, small, non-coding RNAs that are 21-24 nucleotides in length and known to regulate gene expression by silencing target transcripts via complementary base-pairing in various pathways including embryogenesis, development, differentiation, and apoptosis. A previous study has reported significant upregulation of 17 miRNAs (including miR-21) and downregulation of 9 miRNAs in laryngeal squamous cell carcinoma.

Hs_miR-21_5p, Hs_miR-106b_3p, Hs_miR-375_5p, Hs_miR-155_5p, Hs_let7a_5p, Hs_miR-210_3p, Hs_miR-425_3p, Hs_miR-183_5p, and Hs_miR-218_3p expressions were investigated within paraffin-embedded specimens obtained from patients who underwent surgery because of laryngeal lesions between the years 2012 and 2015. After local ethical committee approval (2014/0196), informed consent was obtained from the patients. The samples were analyzed within 3 groups as benign, premalignant and malignant. Patients who had a previous cancer history or had been treated with radiochemotherapy were all excluded. One expert pathologist reviewed 5-μm sections containing the lesions of interest from the FFPE samples. The sections were transferred to the genetics laboratory in suitable conditions.

miRNAs were isolated from the paraffin-embedded tissues of the patients using the miRNeasy FFPE Kit (Qiagen) according to the manufacturer's protocol. Briefly, xylene and ethanol (96-100%) were used to remove paraffin before total RNA isolation. Next, the pellet was treated with 10 μl of proteinase K. The fully deparaffinized and lysed laryngeal tumor supernatant was transferred to a new microcentrifuge tube, and DNase I was added to eliminate the DNA content. Then, the RNeasy MinElute Spin Column (Qiagen) and the relevant buffers of the kit were used, with subsequent centrifugation and flow-through steps at 8000 g for 15 s.
Finally, miRNA-enriched total RNA was eluted with 14 μl of RNase-free water.

cDNAs were randomly primed from 5 μg of miRNA-enriched total RNA with the miScript II Reverse Transcription (RT) Kit (Qiagen). Briefly, the reverse transcription reaction was performed with 4 μl of 5x miScript HiSpec Buffer, 2 μl of 10x Nucleic Acid Mix, 1 μl of miScript Reverse Transcriptase Mix, 8 μl of RNase-free water and 5 μl of template RNA, with a total volume of 20 μl. The RT reaction was incubated at 37 °C for 60 min and 95 °C for 5 min. The cDNA was then diluted with 200 μl of nuclease-free water for further use in real-time PCRs.

Nine miRNAs covering a variety of miRNA sequences were selected, and mature miRNA expression was determined via quantitative real-time polymerase chain reaction (RT-qPCR) with a QuantiTect SYBR Green PCR Kit (Qiagen) on the Rotor-Gene Q instrument using its 2.1.0.9 software. RT-PCR was performed twice for each biological cDNA sample after optimization, including negative and non-template controls.

The threshold was manually determined as 0.025 in all reactions. Standard concentrations were calculated as conc = 10^(-0.293*CT + 7.516), and CT values as CT = -3.410*log(conc) + 25.632, with an R² value of 0.9963; the slope of the standard curve was determined to be -3.410. CT values were exported from the RT-PCR instrument after normalization via the 'Dynamic Tube' and 'Slope Correction' options of the RT-PCR software used. For determining fold change, samples were normalized using the housekeeping genes SNORD68, SNORD95 and MIRTC, which are relevant for miRNA studies. Global mean normalization was used for Ct values; calculated concentrations were exported into an Excel spreadsheet, and the average of duplicate Ct values was converted to quantities for analysis. The quality of the amplified mature miRNA products was checked via melt curve analyses using SYBR Green. Then, the Ct data were analyzed according to the fold-change (2^(-ΔΔCT)) method and converted into fold regulation values using the online miScript miRNA PCR Array Data Analysis Tool (www.qiagen.com). p values under 0.05 were considered statistically significant.
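A minimal sketch of that 2^(-ΔΔCT) fold-change calculation, with invented Ct values for one miRNA (the reference assays per the text are SNORD68, SNORD95 and MIRTC):

```r
# Hypothetical mean Ct values for one target miRNA and the mean of the
# reference assays, in two tissue groups.
ct_target_benign <- 28.4
ct_ref_benign    <- 24.1
ct_target_malig  <- 25.9
ct_ref_malig     <- 24.0

# Delta Ct: target normalized to the references within each group.
d_ct_benign <- ct_target_benign - ct_ref_benign
d_ct_malig  <- ct_target_malig  - ct_ref_malig

# Delta-delta Ct and fold change (2^-ddCt) of malignant versus benign.
dd_ct <- d_ct_malig - d_ct_benign
fold_change <- 2^(-dd_ct)
fold_change  # > 1 indicates upregulation relative to the benign group
```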
The most upregulated miRNA was Hs_miR-183_5p, whereas Hs_miR-218_3p was the lowest.Hs_miR-183_5p, Hs_miR-155_5p, and Hs_miR-106b_3p expressions were statistically significant in both premalignant and malignant groups compared to the benign group, whereas Hs_miR-21_5p, Hs_miR-218_3p, and Hs_miR-210_3p expressions were statistically significant only in the malignant group.In other words, Hs_miR-425_3p whereas Hs_miR-106b_3p was the lowest was identified in Caenorhabditis elegans; to date, thousands of miRNAs have been discovered and many more remain unknown [The importance of miRNAs in gene regulation has emerged in recent years. After the first miRNA unknown \u201312. The unknown \u201316. HereHs_miR-106b_3p is another miRNA that targets the 3\u2019UTR of retinoblastoma (RB) gene [RB largely inhibited by miR-106b which further results in the induction of laryngeal carcinoma cells proliferation by deceased control of RB on G1/S transition of the cell cycle [Hs_miR-106b_3p downregulation [Hs_miR-106b_3p and lymph node metastasis, cancer stage in supraglottic laryngeal carcinoma [Hs_miR-106b_3p expressions both in the premalignant and malignant groups, respectively gene and uprell cycle . Though gulation , Sun et arcinoma . In our ly Table . However, Hs_miR-155_5p was also another miRNA that significant 2.72 and 7.75 fold increased expressions were noticed in the premalignant and malignant group compared to the benign group, respectively . Downregulation of the latter has found to induce apoptosis and inhibit cellular proliferation and migration of laryngeal squamous carcinoma cells [EGR1 and PTEN [egulated . There anificant . In our ma cells . Moreoveand PTEN . Hence, , Hs_miR-21_5p upregulations were supposed to be oncogenic for different kinds of tumors [Hs_miR-21_5p would be a prognostic marker in head and neck tumors that is relevant to our results in which the Hs_miR-21_5p was one of the three miRNAs that significantly upregulated solely in the malignant laryngeal tumors group , survival and growth of tumor cells were reduced, and apoptosis was induced [Hs_miR-21_5p [Hs_miR-21_5p expression and 5-year survival rates in the head and neck carcinomas [Hs_miR-21_5p was an oncomir which is overexpressed in laryngeal carcinomas compared to adjacent normal laryngeal tissue [Hs_miR-21_5p expression was 3.45-fold and 7.57-fold increased in the premalignant and malignant group, respectively , a target gene of Hs_miR-218_3p, inhibited migration and invasion in tumor cells [Hs_miR-218_3p could clarify the mechanism of local recurrence and distant metastasis [miR-218 in the HPV-positive group [Hs_miR-218_3p by the E6 oncogene could cause overexpression of LAMB3, which is a target of Hs_miR-218_3p [miR-218 regulated TFF1. Likewise, overexpression of Hs_miR-218_3p in the malignant group might negatively regulate TFF1 in an Erk1/2-dependent manner and promote malignancy as suggested [Hs_miR-218_3p in laryngeal carcinomas. Thus, we believe that unknown factors such as viral infections can affect the results, which should be investigated in larger series.The next miRNA which was significantly overexpressed only in the metastatic group was the or cells . Additiotastasis . Besidesve group . In anotR-218_3p . In oppouggested . HoweverHs_miR-210_3p whether or not it is a tumor suppressor or an oncomir is still ongoing for many tumor types. Apart from that, Gee et al. 
indicated a correlation between the upregulation of Hs_miR-210_3p and poor prognosis in head and neck tumors by helping the vitality of tumor cells in hypoxic conditions [Hs_miR-210_3p expression was significantly 7.31 fold increased in the malignant group [Hs_miR-375_5p overexpression inhibited cell proliferation, migration, invasion and resulted in increased apoptosis via IGF1R expression [miR-375, increasing levels of miR-375 expression could provide a significant reduction in IGF1R levels and its downstream signaling molecule AKT in laryngeal carcinoma cells [PDK-1 as the target of Hs_miR-375_5p that contributed to AKT activation [Hs_miR-375_5p expression and indicated that the ratio of miR-21/miR-375 had a 94% sensitivity and 94% specificity for distinguishing normal tissue from laryngeal carcinoma tissue [On the other hand, k tumors , 46. Ovell cycle . Also, ipression . Since tma cells . Quaamartivation . Despitea tissue .Hs_miR-106b_3p and Hs_miR-21_5p, while downregulation of Hs_miR-375_5p expression in laryngeal carcinoma tissue compared to adjacent normal tissue [Hs_miR-106b_3p and Hs_miR-21_5p expressions in poor and moderately differentiated laryngeal carcinomas were more upregulated than that in the benign and dysplastic laryngeal tissues [Hs_miR-106b_3p and Hs_miR-21_5p were consistent with our data the downregulation of Hs_miR-375_5p expression particularly in advanced stages than in earlier stages was inconsistent [Apart from that, Yu et al. discovered overexpression of l tissue . Further tissues . Despitensistent . Even thHs_miR-425_3p served as an oncomir and stimulated cell proliferation and inhibited apoptosis [Hs_miR-425_3p upregulation and lymph node metastasis in laryngeal carcinomas [Hs_miR-425_3p expressions were insignificantly increased (Table Hs_let-7a_5p expression was insignificantly 4.12-fold and 10.54-fold upregulated in premalignant and malignant laryngeal tissues when compared to benign laryngeal samples (Table let-7a expression was significantly downregulated in laryngeal squamous cell carcinomas compared to adjacent normal tissues and was significantly further decreased in non-differentiated carcinoma tissues compared with moderately and well-differentiated ones [let-7a expression was insignificantly further upregulated as the tissues became more malignant. (Table let-7a compared with adjacent normal tissues [let-7a could have different impacts on different individuals or that let-7a may not take part in the pathogenesis of all laryngeal carcinomas [Hs_miR-375_5p, Hs_miR-425_3p, and Hs_let-7a_5p.In addition to this, poptosis . Furtherrcinomas . Notwithted ones . In cont tissues . They surcinomas . In the Hs_miR-21_5p, Hs_miR-218_3p, and Hs_miR-210_3p can be a potential biomarker for malignant laryngeal carcinomas. Furthermore, Hs_miR-183_5p, Hs_miR-155_5p, and Hs_miR-106b_3p, each upregulated both in premalignant and malignant groups compared to benign hyperplasia, might have a great value to help physicians to determine the malignancy potential of the laryngeal lesions as the transformation biomarkers upon prognosis.Our study is one of the first to compare the expression levels of several different miRNAs between benign, premalignant and malignant laryngeal lesions with a relatively larger series upon the literature. They indicated that"} +{"text": "HemaSphere. 2020;4:e407), the author made grammar- and style-based corrections to the text as well as to Figure 1. 
These adjustments have been made and do not affect the outcome of this publication. Since the publication of the article entitled “Cytokine Profiling as a Novel Complementary Tool to Predict Prognosis in MPNs?” (https://journals.lww.com/hemasphere/Fulltext/2020/06000/Cytokine_Profiling_as_a_Novel_Complementary_Tool.17.aspx), these additions have been made online:"} +{"text": "Aedes (Tanakius) togoi (Diptera: Culicidae) is found in coastal east Asia in climates ranging from subtropical to subarctic. However, a disjunct population in the Pacific Northwest of North America has an ambiguous heritage. Two potential models explain the presence of Ae. togoi in North America: ancient Beringian dispersal or modern anthropogenic introduction. Genetic studies have thus far proved inconclusive. Here we described the putative ancient distribution of Ae. togoi habitat in east Asia and examined the climatic feasibility of a Beringian introduction into North America using modern distribution records and ecological niche modeling of bioclimatic data from the last interglacial period, the last glacial maximum, and the mid-Holocene (~6000 BP). Our results suggest that suitable climatic conditions existed for Ae. togoi to arrive in North America through natural dispersal as well as to persist there until present times. Furthermore, we find that ancient distributions of suitable Ae. togoi habitat in east Asia may explain the genetic relationships between Ae. togoi populations identified in other studies. These findings indicate the utility of ecological niche modeling as a complementary tool for studying insect phylogeography. The coastal rock pool mosquito, Aedes (Tanakius) togoi, breeds in pools of brackish or salt water above the high tide level on rocky shorelines and, sporadically, in containers of freshwater further inland. In North America it has been recorded in British Columbia, Canada, and northern Washington, United States; Aedes dorsalis larvae had previously been collected from the supralittoral coastal rock pools at Caulfield Cove in North Vancouver, BC. Thus, it is possible that Ae. togoi could have gone unnoticed in North America until sometime in the early-to-mid 20th century. Genetic studies have distinguished several lineages of Ae. togoi: a lineage from temperate and subarctic regions of Japan, China, Taiwan, and other parts of Southeast Asia; a lineage from the subtropical islands of Japan; a lineage from subarctic Japan; and a lineage from Canada. In the present study, we compiled 86 Asian records and 49 North American records of Ae. togoi. Some data points did not occur on pixels within bioclimatic data due to the irregular shape of coastline or the small size of islands. When this occurred, we relocated data points to the closest 1 km2 raster cell. The North Pacific study area extended from within Latitude 0°N to 80°N and Longitude 95°E across the Pacific to 100°W, encompassing all known Ae. togoi populations in Asia and North America and possible unknown populations in the North Pacific. We projected a model from the Asian study area (hereafter referred to as model 1), obtained using Asian Ae. togoi populations, onto the North Pacific study area, following methodology for projecting habitat of invasive species into new areas. We also modeled the North Pacific study area using all known Ae. togoi populations (hereafter referred to as model 2), as if it were indigenous.
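The Methods below reduce each model's continuous suitability surface to a binary presence/absence map at the threshold that maximizes sensitivity plus specificity. The following is a minimal NumPy sketch of that rule, with synthetic scores standing in for Maxent output:
________________________________________________________________________
# Sketch of the "maximum sensitivity plus specificity" thresholding rule;
# the scores here are synthetic stand-ins for Maxent suitability output.
import numpy as np

def max_sens_spec_threshold(presence_scores, background_scores):
    candidates = np.unique(np.concatenate([presence_scores, background_scores]))
    best_t, best_sum = None, -np.inf
    for t in candidates:
        sensitivity = np.mean(presence_scores >= t)    # true-positive rate
        specificity = np.mean(background_scores < t)   # true-negative rate
        if sensitivity + specificity > best_sum:
            best_sum, best_t = sensitivity + specificity, t
    return best_t

rng = np.random.default_rng(0)
presence = rng.beta(5, 2, 100)     # suitability at occurrence records
background = rng.beta(2, 5, 500)   # suitability at background points
t = max_sens_spec_threshold(presence, background)
print(round(float(t), 3), float(np.mean(presence >= t)))
________________________________________________________________________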
We used WGS84 Mercator projection for maps in the last glacial maximum, and WGS84/PDC Mercator projection for all other maps.We selected two study areas: one from Asia from within Latitude 0\u00b0N to 49\u00b0N and Longitude 94\u00b0E to 157\u00b0E . We also downloaded corresponding sets of variables based on climate conditions from: 1) the last interglacial period at a scale of 1 km2 (2 (2 (Ae. togoi overwintering strategy (Ae. togoi observations (model 1) (Ae. togoi observations (model 2) the 1 km2 m2 and all model 2) , each wiAe. togoi. We used default Maxent settings with and without linear features, and with and without quadratic features, with 1,000 replications, each with 2,000 training iterations, 20% of data points withheld for subsampling, clamping applied, and 10,000 background points. We analyzed results based on extensive presence of predicted suitable habitat on maps calculated from the maximum sensitivity and specificity, a recommended approach for transforming the results of species distribution models into binary presence/absence predictions (PO) of the receiving operator characteristic (ROC). This integral gives a sum between 0 and 1 with a result >0.5 indicative of a model that is more accurate than random, >0.7 representing a useful model, >0.9 representing an excellent model, and a result of 1 indicating a perfect fit (PO of our three best candidate modes as measured by AICc and selected the top performer across both metrics (PO were transformed to a percentage and used as a relative metric of variable importance. We built maps with QGIS version 3.4.3 (https://crc806db.uni-koeln.de/) to add glacial coverage layers where applicable.Maximum entropy niche modeling (Maxent) is a common approach for species habitat modeling . Maxent dictions , as wellfect fit . We comp metrics and 3. Won 3.4.3 , and dowon 3.4.3 from thePO of 0.912 (\u00b10.031) and model 2 has a mean AUCPO of 0.972 (\u00b10.009), both representing models with excellent fits . Consistent with a niche conservatism hypothesis .Mean diurnal range and minimum temperature of the coldest month were important environmental variables in each of our models. In addition, precipitation during the coldest quarter was also critical in model 1, while mean temperature during the warmest quarter was important in model 2. There is evidence that, under certain conditions, temperature fluctuations (rather than absolute temperature) can have negative effects on the development rate and survival of some mosquitoes . With thdes spp. , and ano America . Such pa America and over America . Insulat America and coul dry out , and it survival . This vades spp. . Despitedes spp. , the meades spp. indicatepothesis , this ovAe. togoi into North America can also produce slightly different projections, indicating the inherent uncertainty present in any individual approach , Cook Inlet, the Alexander Archipelago, and the North Coast of British Columbia. We also note a need for Ae. togoi surveillance across its novel predicted range in more heavily surveyed areas, including the Hawaiian Islands and the Mariana Archipelago. To further investigate the origins of Ae. 
togoi in North America, we propose a combination of survey efforts and population genetic analyses based on mitochondrial and nuclear genome sequencing. Ultimately, our modeling is based on climate data alone and is subject to the realities of limited sampling; thus, it cannot definitively prove or disprove either an anthropogenic introduction or a Beringian dispersal for the presence of Ae. togoi in North America. Supplementary Figures 1-15, Supplementary Tables 1-5, and supplementary files are available online as additional data files."} +{"text": "OTU143_Capnocytophaga and OTU269_Treponema acted as gatekeepers for both of the two clustered microbiotas. Nine OTUs assigned to seven taxa, i.e., Alloprevotella, Atopobium, Megasphaera, Oribacterium, Prevotella, Stomatobaculum, and Veillonella, were associated with H7N9 patients both with and without secondary bacterial lung infection in Cluster_1. In addition, two groups of healthy cohorts may have potentially different susceptibilities to H7N9 infection. These findings suggest that the two OP microbial colonization states of H7N9 patients were at different dysbiosis states, which may help determine the health status of H7N9 patients, as well as the susceptibility of healthy subjects to H7N9 infection. The dysbiosis of the oropharyngeal (OP) microbiota is associated with multiple diseases, including H7N9 infection. Different OP microbial colonization states may reflect different severities or stages of disease and affect the effectiveness of the treatments. The current study aims to determine the vital bacteria that could possibly drive the OP microbiota in H7N9 patients to a more severe microbial dysbiosis state. The OP microbiotas of 42 H7N9 patients and 30 healthy subjects were analyzed by a series of bioinformatics and statistical analyses. Two clusters of OP microbiotas in H7N9 patients, i.e., Cluster_1_Diseased and Cluster_2_Diseased, were determined at two microbial colonization states by Partition Around Medoids (PAM) clustering analysis, each characterized by distinct operational taxonomic units (OTUs) and functional metabolites.
Cluster_1_Diseased was determined at more severe dysbiosis status compared with Cluster_2_Diseased, while OTU143_ Avian influenza has caused great mortalities to human beings and animals during the last two decades \u20134. The hAcinetobacter baumanii, Candida albicans, Flavobacterium indologenes, Klebsiella, Pseudomonas, and Staphylococcus have been isolated from blood, white blood cell, or sputum of H7N9 patients with SBLI (H7N9_SBLI), whereas no bacterium was isolated from those in the H7N9 patients without SBLI (H7N9_NSI) is a usual condition in the H7N9 patients (7N9_NSI) .Atopobium, Eubacterium, Leptotrichia, Oribacterium, Rothia, Solobacterium, and Streptococcus in H7N9_SBLI than H7N9_NSI clustering analysis in order to determine their microbial colonization states. Before PAM clustering, the average silhouette method was used to determine the optimal numbers of clusters for all the OP microbiotas .2 test was performed to compare the numbers of healthy and H7N9 cohorts in the two clusters.Two clusters of OP microbiotas were determined in healthy cohorts and H7N9 cohorts . A Pearson \u03c7t test.Permutation analysis of variance (PERMANOVA) was performed in R software version 3.6.1 with the vegan package to deterLinear discriminant analysis (LDA) effect size (LEfSe) was performed using Kruskal\u2013Wallis test (\u03b1 < 0.05), followed by a Wilcoxon rank-sum test (\u03b1 < 0.05), and a one-against-all strategy for multiclass analysis . It was P values were computed by a permutation step, followed by a bootstrap procedure to merge the P values into one final P value using a method by Brown (Co-occurrence Network (CoNet) analysis was carried out to investigate the co-occurrence and coexclusion of OTUs in Cluster_1_Diseased and Cluster_2_Diseased and to determine the top 10 OTUs with most correlations in the OP bacterial networks of Cluster_1_Diseased and Cluster_2_Diseased. The detailed processes followed the procedures described by Wagner Mackenzie et al. . Brieflyby Brown .Gatekeepers were defined as the phylotypes interacting with different parts of the network to hold together the bacterial community , 27. In The OTUs differentiating the OP microbiotas of H7N9 patients from those of healthy subjects were determined by using the Galaxy implementation of LEfSe run by Huttenhower laboratory .t test. The same data transformation and statistical approach were applied for the comparison of the AIDRs of Cluster_1_Diseased and Cluster_2_Diseased.Dysbiosis ratios of bacterial taxa were associated with different diseases and conditions , 29. In U test was used to compare the abundances of the OTUs associated with H7N9 between Cluster_1_Diseased and Cluster_2_Diseased. The same approach was performed for the comparisons of OTUs associated with healthy cohort between Cluster_1_Diseased and Cluster_2_Diseased. A Pearson \u03c72 test was applied to compare the numbers of OTUs that were associated with H7N9 and more abundant in Cluster_1_Diseased or Cluster_2_Diseased. The same test was carried out for the comparisons of the numbers of OTUs, which were associated with healthy cohort and more abundant in Cluster_1_Diseased or Cluster_2_Diseased.A series of statistical analyses were also performed to help determine the dysbiosis status of the two clusters of OP microbiotas in H7N9 patients. 
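The following is a minimal sketch of the silhouette-guided PAM step described in the Methods above. It assumes the scikit-learn-extra KMedoids implementation and Bray-Curtis dissimilarities; the distance choice is our assumption, and the abundance matrix is synthetic.
________________________________________________________________________
# Silhouette-guided PAM clustering of a samples-by-OTU abundance matrix.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids

rng = np.random.default_rng(1)
abundances = rng.poisson(5, size=(72, 300)).astype(float)   # 72 OP microbiotas
dist = squareform(pdist(abundances, metric="braycurtis"))   # pairwise distances

# Average silhouette width for k = 2..6; keep the k with the largest width.
scores = {}
for k in range(2, 7):
    labels = KMedoids(n_clusters=k, metric="precomputed", random_state=0).fit_predict(dist)
    scores[k] = silhouette_score(dist, labels, metric="precomputed")
best_k = max(scores, key=scores.get)
labels = KMedoids(n_clusters=best_k, metric="precomputed", random_state=0).fit_predict(dist)
print(best_k, np.bincount(labels))
________________________________________________________________________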
Mann\u2013Whitney The two clustered OP microbiotas in H7N9_NSI (or H7N9_SBLI) were compared to determine whether some OTUs could possibly drive the OP microbiotas to worse microbial dysbiosis state in both H7N9_NSI and H7N9_SBLI.t test, after being transformed in log10 to satisfy the assumptions of normal distribution and equal variance. The same approaches were applied for the comparisons of the AIDRs of H7N9_SBLI cohorts in the two clusters.Avian influenza dysbiosis ratios of H7N9_NSI cohorts in the two clusters were compared by a An LEfSe analysis was applied to determine the OTUs differentiating the two clustered OP microbiotas in H7N9_NSI . The same approach was carried out for determining the OTUs associated with each of the two clustered OP microbiotas in H7N9_SBLI .Similarly, an LEfSe analysis was used to determine the functional metabolites associated with Cluster_1_H7N9_NSI or Cluster_2_H7N9_NSI. The same analysis was used to determine the functional metabolites associated with Cluster_1_H7N9_SBLI or Cluster_2_H7N9_SBLI.The OTUs associated with both Cluster_1_H7N9_NSI and Cluster_1_H7N9_SBLI were determined by an online program Venny diagram version 2.1 . The samt test.Permutation analysis of variance was applied to determine the difference between Cluster_1_Healthy and Cluster_2_Healthy. Avian influenza dysbiosis ratios of Cluster_1_Healthy and Cluster_2_Healthy were transformed in log10 to satisfy the assumptions of normal distribution and equal variance, before being compared by a An LEfSe analysis was applied to determine the OTUs associated with each of the two clustered OP microbiotas in healthy cohorts. The same approach was used for identifying the functional metabolites associated with Cluster_1_Healthy or Cluster_2_Healthy.2 = 9.933, P = 0.002).Silhouette analysis identified two as the most optimal number for clustering all the 72 OP microbiotas . TherefoBacteroidetes, Fusobacteria, and Proteobacteria were more abundant in Cluster_2_Diseased compared with Cluster_1_Diseased, whereas Firmicutes and Saccharibacteria were more abundant in Cluster_1_Diseased than in Cluster_2_Diseased. Nine most abundant orders in the H7N9 patients constituted >90% abundance of all the orders in H7N9 microbiotas, among which Campylobacterales, Clostridiales, Flavobacteriales, Lactobacillales, and Selenomonadales had greater abundances in Cluster_1_Diseased compared with Cluster_2_Diseased, whereas Bacteroidales, Fusobacteriales, Neisseriales, and Pasteurellales were more abundant in Cluster_2_Diseased than in Cluster_1_Diseased.The five most abundant phyla in the OP microbiotas of H7N9 patients accounted for >90% abundance of the OP microbiotas. Among them, R2 = 0.090, P < 0.001). The dissimilarity between Cluster_1_Diseased and Cluster_2_Diseased was relatively high (SIMPER dissimilarity = 64.8%) according to SIMPER results. The similarity within Cluster_1_Diseased (SIMPER average similarity = 38.2%) was lower than that within Cluster_2_Diseased (SIMPER average similarity = 44.1%). Both richness and diversity were similar in Cluster_1_Diseased and Cluster_2_Diseased , but the evenness was significantly higher in Cluster_1_Diseased than in Cluster_2_Diseased .Neisseria) with the largest LDA scores were closely associated with Cluster_1_Diseased and Cluster_2_Diseased, respectively , suggesting a lower AIDR was likely to indicate the dysbiosis of OP microbiotas in H7N9 patients compared with healthy subjects. 
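Before the cluster comparisons, the following is a minimal sketch of the AIDR computation just described. It assumes that the avian influenza dysbiosis ratio of a sample is the summed abundance of healthy-associated OTUs over the summed abundance of H7N9-associated OTUs, consistent with lower values indicating dysbiosis; all abundances here are synthetic.
________________________________________________________________________
# AIDR per sample, log10 transform, and a two-sample t test between clusters.
import numpy as np
from scipy import stats

def aidr(sample, healthy_idx, disease_idx):
    return sample[healthy_idx].sum() / sample[disease_idx].sum()

rng = np.random.default_rng(2)
cluster1 = rng.poisson(4, size=(20, 91)).astype(float) + 1   # 22 + 69 marker OTUs
cluster2 = rng.poisson(4, size=(22, 91)).astype(float) + 1
healthy_idx, disease_idx = np.arange(22), np.arange(22, 91)

# log10 transform to meet normality/equal-variance assumptions, then t test.
aidr1 = np.log10([aidr(s, healthy_idx, disease_idx) for s in cluster1])
aidr2 = np.log10([aidr(s, healthy_idx, disease_idx) for s in cluster2])
t, p = stats.ttest_ind(aidr1, aidr2)
print(round(float(t), 3), round(float(p), 3))
________________________________________________________________________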
The AIDR was significantly higher in Cluster_2_Diseased (5.28 \u00b1 1.12 SE) than Cluster_1_Diseased (1.48 \u00b1 0.21 SE) .The LEfSe results showed that 69 OTUs were associated with H7N9, and 22 OTUs were associated with healthy cohort, which were used for the calculations and comparisons of AIDRs in different groups . The heaP < 0.05) .Eleven (of 69) OTUs associated with H7N9 were determined with significantly different abundances between Cluster_1_Dieased and Cluster_2_Diseased . Among tP < 0.02) .Likewise, seven (of 22) OTUs associated with healthy cohort were determined with significantly different abundances between the two clustered H7N9 microbiotas . Among tThese above results consistently suggested that Cluster_2_Diseased were at better dysbiosis status compared with Cluster_1_Diseased.The two bacterial networks of Cluster_1_Diseased and Cluster_2_Diseased were determined by CoNet analysis . None ofCapnocytophaga and OTU269_Treponema were determined as gatekeepers for both Cluster_1_Diseased and Cluster_2_Diseased.Fragmentation results demonstrated a lower fragmentation score in Cluster_2_Diseased than that of Cluster_1_Diseased , suggesting that Cluster_2_Diseased had stronger co-occurrence patterns and greater biotic interactions than Cluster_1_Diseased. A group of nine OTUs was determined to be gatekeepers of Cluster_1_Diseased, and another group of nine OTUs was identified as gatekeepers of Cluster_2_Diseased . OTU143_The two clustered OP microbiotas in H7N9_NSI or H7N9_SBLI were compared to determine the bacteria that could possibly drive the OP microbiotas to more severe microbial dysbiosis state in both H7N9_NSI and H7N9_SBLI.t test, P = 0.001). Likewise, Cluster_2_H7N9_SBLI (4.44 \u00b1 1.76 SE) had a significantly higher AIDR compared with Cluster_1_H7N9_SBLI (1.53 \u00b1 0.30 SE) .Avian influenza dysbiosis ratio was significantly higher in Cluster_2_H7N9_NSI (5.98 \u00b1 1.51 SE) compared with Cluster_1_H7N9_NSI (1.43 \u00b1 0.31 SE) (LEfSe results showed that 33 OTUs were associated with OP microbiotas of H7N9_NSI in Cluster_1 (Cluster_1_H7N9_NSI), and 20 OTUs were associated with OP microbiotas of H7N9_NSI in Cluster_2 (Cluster_2_H7N9_NSI) . A totalLikewise, a total of 20 OTUs were more associated with Cluster_1_H7N9_SBLI, and 41 OTUs were more associated with Cluster_2_H7N9_SBLI . A totalAlloprevotella, Atopobium, Megasphaera, Oribacterium, Prevotella, Stomatobaculum, and Veillonella. Likewise, four OTUs assigned to Alloprevotella or Porphyromonas were associated with both Cluster_2_H7N9_NSI and Cluster_2_H7N9_SBLI associated with both Cluster_1_H7N9_NSI and Cluster_1_H7N9_SBLI , which w7N9_SBLI .In addition, five of the 20 OTUs associated with Cluster_2_H7N9_NSI were also identified being associated with healthy cohort. Likewise, five of 41 OTUs associated with Cluster_2_H7N9_SBLI were also determined being associated with healthy cohort.Dialister was negatively correlated with four other OTUs associated with Cluster_2_H7N9_SBLI . Avian influenza dysbiosis ratio was significantly higher in Cluster_2_Healthy (19.74 \u00b1 3.83 SE) than Cluster_1_Healthy (4.83 \u00b1 1.91 SE) .The PERMANOVA results showed a significant difference between the Cluster_1_Healthy and Cluster_2_Healthy were associated with Cluster_1_Diseased, whereas no Veillonella was associated with Cluster_2_Diseased, suggesting that the increased Veillonella in the OP microbiota could be a source of Veillonella in the gut of H7N9 patients with Cluster_1_Diseased.bacteria . In the controls . 
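Returning to the network analysis above, the following is a minimal sketch of a fragmentation score of the kind reported there, assuming the common definition as the fraction of node pairs with no connecting path; gatekeeper candidates can be screened by how much fragmentation rises when a node is removed. The toy edge list is hypothetical, not the actual CoNet output.
________________________________________________________________________
# Network fragmentation and a simple gatekeeper screen with networkx.
import networkx as nx

def fragmentation(g):
    n = g.number_of_nodes()
    reachable = sum(len(c) * (len(c) - 1) for c in nx.connected_components(g))
    return 1.0 - reachable / (n * (n - 1))

g = nx.Graph([("OTU143_Capnocytophaga", "OTU269_Treponema"),
              ("OTU269_Treponema", "OTU1_Prevotella"),
              ("OTU1_Prevotella", "OTU9_Veillonella")])
base = fragmentation(g)
for node in list(g.nodes):          # remove each node in turn
    h = g.copy()
    h.remove_node(node)
    print(node, round(fragmentation(h) - base, 3))
________________________________________________________________________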
In the Capnocytophaga was part of resident OP microbiota in human and could be opportunistic pathogens of extraoral infections , suggesting these phylotypes were likely to cause more severe conditions in H7N9 patients. Some Alloprevotella and Porphyromonas species were determined as opportunistic oral pathogens (Alloprevotella, OTU20_Porphyromonas, OTU26_Porphyromonas, and OTU193_Alloprevotella were associated with both Cluster_2_H7N9_NSI and Cluster_2_H7N9_SBLI, suggesting the four phylotypes were more likely as opportunistic pathogens for inducing H7N9_NSI or H7N9_SBLI.H7N9_NSI . The curathogens \u201364. In tDialister species were associated with some oral diseases, such as periodontal disease, apical periodontitis, and dentinal caries (Dialister was negatively correlated with four OTUs in Cluster_2_H7N9_SBLI, suggesting this phylotype was more likely as pathogenic bacteria and had competitive interactions with some other phylotypes within this OP microbial colonization state. We acknowledge that further studies are needed to confirm it.A few l caries \u201367. AmonMethyl-accepting chemotaxis protein is vital to the cell survival, pathogenesis, and biodegradation . In the Veillonella was more abundant in the gut of H7N9 patients than healthy controls as described above (Veillonella and methyl-accepting chemotaxis protein were determined as the phylotype and functional metabolite most associated with Cluster_1_Healthy, suggesting they were more likely to enhance the greater susceptibility of the healthy subjects to H7N9 infection.Greater AIDR was determined in Cluster_2_Healthy than Cluster_1_Healthy, suggesting the healthy subjects in Cluster_2 could be more tolerant to H7N9 infection than those in Cluster_1. ed above , whereased above . In the Alloprevotella, Atopobium, Megasphaera, Oribacterium, Prevotella, Stomatobaculum, and Veillonella were likely to drive the OP microbiotas to worse microbial dysbiosis state in both H7N9_NSI and H7N9_SBLI. In addition, healthy subjects could have different susceptibilities to H7N9 infection.In conclusion, two OP microbial colonization states were determined in the H7N9 patients, each characterized by distinct phylotypes. Nine phylotypes assigned to PRJNA638222.The raw sequencing data were deposited in NCBI under BioProject accession no. The studies involving human participants were reviewed and approved by The Institutional Review Board and Ethics Committee of the First Affiliated Hospital of Zhejiang University. The patients/participants provided their written informed consent to participate in this study.HZha and LL designed the study. HL and HZhang collected samples and provided the raw sequencing data. HZha, JW, and KC contributed to the data analyses. HZha interpreted the results and drafted the manuscript. HZ, HL, and JW reviewed the manuscript. QW, JL, QL, and YL contributed to the literature search and participate in the study design. All authors approved the final manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Supplementary Table 1 and Supplementary Table 2. 
The expression values given for PDK4 in Supplementary Table 1, ALL_M, CLL_M, AHL_M, CHL_M contrasts were \u22123.90808, 2.10011, \u22124.12057, and \u22124.12057 the correct values are \u22124.1009, 2.07292, \u22124.63904, and 3.05659 same value should appear at the \u201cMax_expression_level\u201d column in HL_node_table in Supplementary Table 2.In the original article, there was a mistake in MT4 given in Supplementary Table 1 for ALL_H, CLL_H, AHL_H, and CHL_H are \u22121.20147, 1.485881, \u22121.19557, and 1.0025, the correct values are \u22123.1675, \u22121.82983, \u22121.35669, and \u22121.84142. To reflect this change, columns \u201cMax_expression_level,\u201d \u201cMax_Tissue,\u201d and \u201cUp/Down\u201d on Supplementary Table 2, \u201cLL_node_table\u201d tab is corrected.Similarly the expression values of The authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated."} +{"text": "This paper presents a novel prototype platform that uses the same LaTeX mark-up language, commonly used to typeset mathematical content, as an input language for modeling optimization problems of various classes. The platform converts the LaTeX model into a formal Algebraic Modeling Language (AML) representation based on Pyomo through a parsing engine written in Python and solves by either via NEOS server or locally installed solvers, using a friendly Graphical User Interface (GUI). The distinct advantages of our approach can be summarized in (i)\u00a0simplification and speed-up of the model design and development process (ii) non-commercial character (iii)\u00a0cross-platform support (iv) easier typo and logic error detection in the description of the models and (v) minimization of working knowledge of programming and AMLs to perform mathematical programming modeling. Overall, this is a presentation of a complete workable scheme on using LaTeX for mathematical programming modeling which assists in furthering our ability to reproduce and replicate scientific work. Mathematical modeling constitutes a rigorous way of inexpensively simulating complex systems\u2019 behavior in order to gain further understanding about the underlying mechanisms and trade-offs. By exploiting mathematical modeling techniques, one may manipulate the system under analysis so as to guarantee its optimal and robust operation.http://www.Pyomo.org/)\u00a0(https://www.python.org/),\u00a0(The dominant computing tool to assist in modeling is the Algebraic Modeling Languages (AMLs)\u00a0. AMLs hamo.org/)\u00a0 for moden.org/),\u00a0 and AMPLn.org/),\u00a0 are the \u2022a strict and specific syntax for the mathematical notation to describe the models; \u2022understand in terms of structural demands;solver interfaces, the bridge between mathematics and what the solver can \u2022a series of available optimization solvers for as many classes of problems as supported with the associated functional interfaces implemented; \u2022explicit data file formats and implementation of the respective import/export mechanisms.vertical abstraction) or extend the embedded functionality . This limited declaration of model components elevates the amount of processing that the platform has to conduct in order to provide equivalent formulations of the input.AMLs provide a level of abstraction, which is higher than the direct approach of generating a model using a programming language. 
The different levels in the design process of a model are depicted in raction) . The layA systems approach, MOSAIC , has beeLiterate Programming and (ii) the notions of reproducible and replicable research, the fundamental basis of scientific analysis. Literate Programming focuses on generating programs based on logical flow and thinking rather than being limited by the imposing syntactical constraints of a programming language. In essence, we employ a simple mark-up language, LaTeX, to describe a problem and then in turn produce compilable code (Pyomo abstract model) which can be used outside of the presented prototype platform\u2019s framework. Reproducibility and the ability to replicate scientific analysis is crucial and challenging to achieve. As software tools become the vessel to unravel the computational complexity of decision-making, developing open-source software is not necessarily sufficient; the ability for the averagely versed developer to reproduce and replicate scientific work is very important to effectively deliver impact (https://www.coin-or.org/), Science evolves when previous results can be easily replicated.Our work expands upon two axes: (i) the programming paradigm introduced by Donald E. Knuth on Literr impact . To quotvertical abstraction. It therefore strengthens the ability to reproduce and replicate optimization models across literature for further analysis by reducing the demands in working knowledge of AMLs or coding. The key capability is that it parses LaTeX\u00a0formulations of mathematical programs (optimization problems) directly into Pyomo abstract models. The framework then combines the produced abstract model with data provided in the AMPL .dat format (containing parameters and sets) to produce a concrete model. This capability is provided through a graphical interface which accepts LaTeX\u00a0input and AMPL data files, parses a Pyomo model, solves with a selected solver , and returns the optimal solution if feasible, as the output. The aim is not to substitute but to establish a link between those using a higher level of abstraction. Therefore, the platform does not eliminate the use of an AML or the advantages emanating from it.In the endeavor of simplifying the syntactical requirements imposed by AMLs we have developed a prototype platform. This new framework is materializing a level of modeling design that is higher than the AMLs in terms of This is a complete prototype workable scheme to address how LaTeX\u00a0could be used as an input language to perform mathematical programming modeling, and currently supports Linear Programming (LP), Mixed-Integer Linear Programming (MILP) as well as Mixed-Integer Quadratic Programming (MIQP) formulations. Linear Optimization has provThis paper is organized as follows: in \u2018Functionality\u2019, we describe the current functionality supported by the platform at this prototype stage. In \u2018Parser - Execution Engine\u2019, we present the implementation details of the parser. \u2018An illustrative parsing example\u2019 provides a description of an illustrative example. A discussion follows in \u2018Discussion\u2019. Some concluding remarks are drawn in \u2018Conclusion\u2019. Examples of optimization models that were reproduced from scientific papers as well as their corresponding LaTeX\u00a0formulations and Pyomo models can be found in the The set of rules that are admissible to formulate models in this platform are formal LaTeX\u00a0commands and they do not represent in-house modifications. 
We assume that the model will be in the typical format in which optimization programs commonly appear in scientific journals. Therefore, the model must contain the following three main parts, in this order: 1. the objective function to be optimized (either maximized or minimized); 2. the (sets of) constraints, that is, the relationships between the decision variables, the coefficients, and the right-hand side (RHS); 3. the decision variables and their domain space. We used the programming environment of Python coupled with its modeling library, namely Pyomo. Similar approaches in terms of software selection have been presented for Differential and Algebraic Equations (DAE) modeling and optimization in earlier work. By combining the two, the platform reads LaTeX code and then writes Pyomo abstract models; in other words, the code generates code. The resulting .py file is usable outside of the platform's framework, so the two need not remain bound after conversion. The main components that we employed for this purpose are the following: • Front-end: HTML, JavaScript, MathJax (https://www.mathjax.org/) and Google Polymer (https://www.polymer-project.org/); • Back-end: Python with Django (https://www.djangoproject.com/) and Pyomo. As the main feature of the platform is to allow modeling in the LaTeX language, we used MathJax as the rendering engine. In this way, the user can see the compiled version of the input model. All of these components form a single suite that works across different computational environments. The front-end is plain but incorporates the necessary functionality for input and output, as well as some solver options. The role of the back-end is to establish the communication between the GUI and the parser with the functions therein.
In this way the inputs are processed inside Python in the background, and the user simply witnesses a seamless working environment without having to understand the black-box parser in detail. The main components of the GUI are: • Abstract model input: the LaTeX model, entered either directly inside the Polymer input text-box or via file upload (a .tex file containing the required LaTeX source); • Data files: the data set that follows the abstract definition of the model, uploaded as an AMPL-format (.dat) data file; • Solver options: an array of solver-related options, such as: 1. NEOS server job using CPLEX; 2. Solve the relaxed LP (if MILP); 3. Select GLPK (built-in) as the optimization solver; 4. Select CPLEX (if available) as the optimization solver (currently set to default). The following is an example of a LaTeX-formulated optimization problem that is ready to use with the platform, the well-known Traveling Salesman Problem (TSP); the raw LaTeX code used to generate it was:
________________________________________________________________________
\text{minimize } \sum\limits_{i,j : i \neq j}^{} c_{i,j} x_{i,j} \\
\text{subject to: } \\
\sum\limits_{j : i \neq j}^{} x_{i,j} = 1 \quad \quad \forall i \\
\sum\limits_{i : i \neq j}^{} x_{i,j} = 1 \quad \quad \forall j \\
u_{i} - u_{j} + n x_{i,j} \leq n - 1 \quad \quad \forall i \geq 2, j \leq |j| - 1, i \neq j \\
u \in \mathbb Z, x \in \{0,1\} \\
________________________________________________________________________
which is the input for the platform. The user can either type this code directly inside the Google Polymer text box or upload a pre-made .tex file in the corresponding field of the GUI. Either way, the MathJax engine then renders the LaTeX appropriately so the user can see the resulting compiled model live. Subject to syntax errors, the MathJax engine might or might not eventually render the model, as naturally expected. Empty lines and spaces play no role, nor do commented-out lines using the standard notation (the percentage symbol %).
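For orientation, a hand-written Pyomo abstract model corresponding to this TSP input might look as follows. This is a sketch assuming nodes labeled 1..n with node 1 anchoring the subtour-elimination constraints, not the parser's verbatim output:
________________________________________________________________________
# Hand-written sketch of a Pyomo abstract model matching the TSP input above.
from pyomo.environ import (AbstractModel, Set, Param, Var, Objective,
                           Constraint, Binary, Integers, minimize)

model = AbstractModel()
model.i = Set(dimen=1)
model.j = Set(dimen=1)
model.c = Param(model.i, model.j)          # travel costs
model.n = Param()                          # number of nodes
model.x = Var(model.i, model.j, domain=Binary)
model.u = Var(model.i, domain=Integers)    # MTZ ordering variables

def obj_rule(m):
    return sum(m.c[i, j] * m.x[i, j] for i in m.i for j in m.j if i != j)
model.OBJ = Objective(rule=obj_rule, sense=minimize)

def leave_once_rule(m, i):
    return sum(m.x[i, j] for j in m.j if i != j) == 1
model.LeaveOnce = Constraint(model.i, rule=leave_once_rule)

def enter_once_rule(m, j):
    return sum(m.x[i, j] for i in m.i if i != j) == 1
model.EnterOnce = Constraint(model.j, rule=enter_once_rule)

def mtz_rule(m, i, j):
    # Subtour elimination; skipping node 1 on both indexes is a slight
    # simplification of the stated i >= 2, j <= |j| - 1 condition.
    if i == j or i == 1 or j == 1:
        return Constraint.Skip
    return m.u[i] - m.u[j] + m.n * m.x[i, j] <= m.n - 1
model.MTZ = Constraint(model.i, model.j, rule=mtz_rule)
________________________________________________________________________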
The model file always begins with the objective function sense, the function itself, and then the sets of constraints follow, with the variables and their respective type at the end of the file.which is the input for the platform. The user can either input this code directly inside the Google polymer text box or via a parser we define the part of the code (a collection of Python functions) in the back-end side of the platform which is responsible for translating the model written in LaTeX\u00a0to Pyomo, the modeling component of the Python programming language. In order to effectively translate the user model input from LaTeX, we need an array of programming functions to carry out the conversion consistently since preserving the equivalence of the two is implied. The aim of the implementation is to provide minimum loss of generality in the ability to express mathematical notation for different modeling needs.As A detailed description of the implemented scheme is given in .tex model file and the .dat AMPL formatted data file are given, the platform then starts processing the model. The conversion starts by reading the variables of the model and their respective types, and then follows with component identification (locating the occurrence of the variables in each constraint) and their inter-relationships . Additionally, any summation and constraint conditional indexing schemes will be processed separately. Constraint-by-constraint the parser gradually builds the .py Pyomo abstract model file. It then merges through Pyomo the model with its data set and calls the selected solver for optimization.Once the \\quad command). The platform also supports the use of Greek letters. For instance, if a parameter is declared as \u03b1 the platform identifies the symbol, removes the backslash and expects to find alpha in the data-file. This takes place also in the pre-processing stage.A significant amount of pre-processing takes place prior of parsing. The minimum and essential is to first tidy up the input; that is, clear empty lines and spaces, as well as reserved (by the platform) keywords that the user can include but do not play any role in functional parsing , no matter if the initial input was done using the frac environment.The user can also opt-out selectively the constraints by putting regular comments in LaTeX, with the insertion of the percentage symbol (%) in the beginning of each expression. Once done, we attempt to simplify some types of mathematical expressions in order to be able to better process them later on. More specifically, we have two main functions that handle fractions and common factor (distributive expressions) simplifications. For example: This keeps the basic component identification functions intact, since their input is transformed first to the acceptable analytical format. Instead of transforming the parsing functions, we transform the input in the acceptable format. However, the user does not lose either functionality or flexibility, as this takes place in the background. To put it simply, either the user inputs the analytic form of an expression or the compact, the parser is still able to function correctly.To frame the capabilities of the parser, we will now describe how the user can define optimization models in the platform with a given example and the successful parsing to Pyomo. 
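The following is a minimal sketch of this pre-processing pass; function and keyword names are illustrative, not the platform's actual internals:
________________________________________________________________________
# Drop blank lines and %-commented (opted-out) constraints, strip cosmetic
# commands such as \quad, and map Greek macros like \alpha to the bare name
# expected in the data file.
import re

def preprocess(tex_lines):
    cleaned = []
    for line in tex_lines:
        line = line.strip()
        if not line or line.startswith("%"):
            continue                          # empty or commented-out input
        line = re.sub(r"\\quad", " ", line)   # reserved keyword, no parsing role
        line = re.sub(r"\\(alpha|beta|gamma|delta|lambda|mu)", r"\1", line)
        cleaned.append(re.sub(r"\s+", " ", line).strip())
    return cleaned

print(preprocess([r"% x_{i} \leq b_{i}", "", r"\alpha x_{i} \leq b_{i} \quad \forall i"]))
________________________________________________________________________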
The parser first attempts to split the model into its three major distinct parts: \u2022the objective function \u2022the sets of constraints \u2022the types of the variables definedThese three parts are in a way independent but interconnected as well.string manipulation functions, therefore the use of regular expressions in Python was essential and effective.The parser first attempts to read the variables and their respective domain space (type). The platform is case sensitive since it is based on Pyomo. The processing is done using keywords as identifiers while scanning from the top to the bottom of the manually curated .tex file which contains the abstract model in LaTeX. For the three respective different parts mentioned earlier, the corresponding identifiers are:Reasonably, the focus was on consistency and reliability, rather computational performance mainly due to the lightweight workload of the processing demands in general. In order to do that, the parser uses 1.minimize, maximize}Objective function: { 2.leq, \\geq, =}Sets of constraints: {\\ 3.mathbb , {0,\u00a01}}Variables and their types: {\\.py output model file. Variable types can appear in the following way:This helps separate the processing into sections. Each section is analyzed and passes the information in Pyomo syntax in the \u2022 \\in\u00a0\\mathbb\u00a0R for Real numbers (\u2208\u211d)\u2022 \\in\u00a0\\mathbb\u00a0R_+ for non-negative Real numbers (\u2208\u211d+)\u2022 \\in\u00a0\\mathbb\u00a0R_{*}^{+} for positive Real numbers \u2022 \\in\u00a0\\mathbb\u00a0Z for integers (\u2208\u2124)\u2022 \\in\u00a0\\mathbb\u00a0Z_+ for non-negative integers (\u2208\u2124+)\u2022 \\in\u00a0\\mathbb\u00a0Z_{*}^{+} for positive integers , the parser then creates a list of strings for the names of the variables. This is one of the crucial structures of the parser and utilized alongside the entire run-time of the conversion process. A list of the same length, which holds the types of each respective variable, is also created. The platform in general uses Python lists to store information about variables, index sets, parameters, scalars etc.Our approach for understanding the inter-mathematical relationships between the variables and the parameters relied on exploiting the fundamental characteristics of Linear Programming: \u2022Proportionality \u2022Additivity \u2022Divisibilitydecompose them. By decomposition we define the fragmentation of each mathematical expression at each line of the .tex input model file into the corresponding variables, parameters, summations etc. so as we can process the given information accordingly. A simple graphical example is given in These mathematical relationships can help us understand the structure of the expressions and how to ax to describe coefficient a being multiplied by variable x). In some cases however it is imperative to use the asterisk to decompose a multiplication. For example, say Ds is a parameter and s is also a variable in the same model. There is no possible way to tell whether the expression Ds actually means D*s or if it is about a new parameter altogether, since the parameters are not explicitly defined in the model definition (as in AMLs). Adding to that the fact that for the scalars there is no associated underscore character to identify the parameter as those are not associated with index sets, the task is even more challenging. Therefore, we should write D*s if D is a scalar. 
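The following is a minimal sketch of this decomposition step, assuming a declared variable list and the underscore convention name_{indexes}; as explained above, scalars such as D carry no index and must be written with an explicit asterisk:
________________________________________________________________________
# Split a linear expression into terms on + and -, then use name_{indexes}
# to tell known variables from newly met parameters.
import re

VARIABLES = {"x", "s"}                        # declared in the variables section

def decompose(expression):
    components = []
    for term in re.split(r"[+-]", expression.replace(" ", "")):
        for factor in term.split("*"):        # explicit multiplication
            for m in re.finditer(r"([A-Za-z]+)_\{([^}]*)\}", factor):
                name, idx = m.group(1), tuple(m.group(2).split(","))
                kind = "variable" if name in VARIABLES else "parameter"
                components.append((name, idx, kind))
    return components

print(decompose("a_{i,j}x_{i,j} + D*s_{i}"))
# -> [('a', ('i', 'j'), 'parameter'), ('x', ('i', 'j'), 'variable'),
#     ('s', ('i',), 'variable')]
________________________________________________________________________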
As for parameters with index sets, for example Dsisi causes no confusion for the parser because the decomposition based on the underscore character clearly reveals two separate components. In this way, the platform also identifies new parameters. This means that since we know, for instance, that s is a variable but Ds is not, we can dynamically identify Ds on the fly (as we scan the current constraint) as being a parameter which is evidently multiplied with variable s, both having index set i associated with them. However, we need to pay attention on components appearing iteratively in different or in the same sets of constraints; did we have the component already appearing previously in the model again? In that case we do not have to declare it again in the Pyomo model as a new quantity, as that would cause a modeling error.The decomposition with the regular expressions is naturally done via the strings of the possible operators found, that is: addition, subtraction, division , since the asterisk to denote multiplication (\u2217 or \u22c5) is usually omitted in the way we describe the mathematical expressions model. For instance if a set i is identified, the string model.i\u00a0=\u00a0Set(dimen\u00a0=\u00a01) is first written inside the text version of the Pyomo model file, and then on-the-fly executed independently inside the already parsing Python function using the exec command. The execution commands run in a sequential manner. All the different possible cases of relationships between parameters and variables are dynamically identified, and the parser keeps track of the local (per constraint) and global (per model) list of parameters identified while scanning the model in dynamically growing lists.By split function carefully transfers this information intact to the Pyomo model.Dynamic identification of the parameters and index sets is one of the elegant features of the platform, since in most Algebraic Modeling Languages (AMLs) the user explicitly defines the model parameters one-by-one. In our case, this is done in an intelligent automated manner. Another important aspect of the decomposition process is the identification of the constraint type , since the position of the operator is crucial to separate the left and the right hand side of the constraint. This is handled by an independent function. Decomposition also helps identify Quadratic terms. By automatic conversion of the caret symbol to \u2217\u2217 (as this is one of the ways to denote power of a variable in Pyomo language) the Summation terms need to be enclosed inside parentheses (\u22ef), even with a single component. This accelerates identification of the summation terms with clarity and consistency. Summations are in a way very different than processing a simplified mathematical expression in the sense that we impose restrictions on how a summation can be used. First of all, the corresponding function to process summations tries to identify how many summation expressions exist in each constraint at a time. Their respective indexing expressions are extracted and then sent back to the index identification functions to be processed. The assignment of conditional indexing with the corresponding summation is carefully managed. Then, the summation commands for the Pyomo model file are gradually built. 
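Before turning to summations in detail, the following is a minimal sketch of the on-the-fly registration described above, in which each generated Pyomo statement is both appended to the output file text and exec'd so that later constraints can refer to it; names are illustrative:
________________________________________________________________________
# Emit-and-exec registration of model components, skipping duplicates.
from pyomo.environ import AbstractModel, Set, Param

model = AbstractModel()
emitted_lines = []                  # becomes the generated Pyomo model file
registered = set()                  # guards against double declaration

def register(component, statement):
    if component in registered:     # already met in an earlier constraint
        return
    registered.add(component)
    emitted_lines.append(statement)
    exec(statement, globals())      # usable while parsing continues

register("i", "model.i = Set(dimen=1)")
register("j", "model.j = Set(dimen=1)")
register("c", "model.c = Param(model.i, model.j)")
register("c", "model.c = Param(model.i, model.j)")   # ignored duplicate
print("\n".join(emitted_lines))
________________________________________________________________________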
Summations can be expressed in the following form, and two different fields can be utilized to exploit conditional indexing (upper and lower brackets): ________________________________________________________________ \u2216sum\u2216\u00a0l\u00a0i\u00a0m\u00a0i\u00a0t\u00a0s\u00a0_\u00a0{p\u00a0:\u00a0\u00a0X_{n\u00a0,\u00a0p}\u00a0=\u00a0\u00a01}\u02c6{} ________________________________________________________________ which then compiles to: p, (that is for p\u00a0=\u00a01:|p|) but only when Xn,p\u00a0=\u00a01 at the same time. If we want to use multiple and stacked summations we can express them in the same way by adding the indexes for which the summation will be generated, as for example:This means that the summation will be executed for all values of _______________________________________________________________________ \u2216\u00a0sum\u00a0\u2216\u00a0l\u00a0i\u00a0m\u00a0i\u00a0t\u00a0s\u00a0_\u00a0{\u00a0i\u00a0,\u00a0j\u00a0}\u02c6{} ________________________________________________________________________ which then compiles to: i,\u00a0j. Dynamic (sparse) sets imposed on constraints can be expressed as:and will run for the full cardinality of sets _____________________________________________________________________________ X\u00a0_{\u00a0i\u00a0,\u00a0j\u00a0}\u00a0=\u00a0Y\u00a0_\u00a0{\u00a0i\u00a0,\u00a0j\u00a0}\u00a0\u2216\u00a0f\u00a0o\u00a0r\u00a0a\u00a0l\u00a0l\u00a0\u00a0\u2216\u00a0in\u00a0\u00a0C\u00a0\u2216\u2216 ______________________________________________________________________________ Xi,j\u00a0=\u00a0Yi,j\u2200\u00a0\u2208\u00a0Cwhich then compiles to: i,\u00a0j) which belong to the dynamic set C. In order to achieve proper and precise processing of summations and conditional indexing, we have built two separate functions assigned for the respective tasks. Since specific conditional indexing schemes can take place both for the generation of an entire constraint or just simply for a summation inside a constraint, two different sub-functions process this portion of information. This is done using the \\forall command at the end of each constraint, which changes how the indexes are being generated for the vertical expansion of the constraints from a specific index set. Concerning summations it is done with the bottom bracket information for horizontal expansion, as we previously saw, for instance, with p:Xn,p\u00a0=\u00a01.This means that the constraint is being generated only for those values of or if a more complex expression is used, the for-loop indexes for the summations are found before the colon symbol (:).A series of challenges arise when processing summations. For instance, which components are inside a summation symbol? A variable that might appear in two different summations at the same constraint can cause confusion. Thus, using a binary list for the full length of variables and parameters present in a constraint we identify the terms which belong to each specific summation. This binary list gets re-initialized for each different summation expression. From the lower bracket of each summation symbol, the parser is expecting to understand the indexes for which the summation is being generated. This is done by either simply stating the indexes in a plain way symbol which then helps understand for which indexes the constraints are being sequentially generated . For instance, \u2200\u00a0\u2208\u00a0C makes sure that the constraint is not generated for all combinations of index sets i,\u00a0j, but only the ones appearing in the sparse set C. 
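The following is a minimal sketch of the lower-bracket analysis just described: the text inside the subscript braces is split at the first colon into the running indexes and an optional condition, which would then be handed to the conditional-indexing routines:
________________________________________________________________________
# Parse \sum\limits_{...}^{} subscripts into (indexes, condition).
import re

def parse_sum_subscript(latex_sum):
    m = re.search(r"\\sum\\limits_\{(.*?)\}\^\{", latex_sum)
    if not m:
        raise ValueError("not a summation in the expected form")
    indexes, _, condition = m.group(1).partition(":")
    return ([s.strip() for s in indexes.split(",")], condition.strip() or None)

print(parse_sum_subscript(r"\sum\limits_{p : X_{n,p} = 1}^{} x_{n,p}"))  # (['p'], 'X_{n,p} = 1')
print(parse_sum_subscript(r"\sum\limits_{i,j}^{} c_{i,j} x_{i,j}"))      # (['i', 'j'], None)
________________________________________________________________________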
The sparse sets are being registered also on the fly, if found either inside summation indexing brackets or in the constraint general indexing (after the \u2200 symbol) by using the keywords \\in, \\notin. The simplest form of constraint indexing is for instance: i and the summation is running for all those values of set j such that i is not equal to j. More advanced cases of constraint conditional indexing are also identified, as long as each expression is separated with the previous one by using a comma. For example in: At the end of each constraint, the parser identifies the \u201c \u2200\u201d\u2019 ( 1.to identify left part (before the operator/reserved keyword/command), 2.the operator and 3.the right-hand part.i\u00a0<\u00a0|i|, the left part is set i, the operator is < and the right-hand part is the cardinality of set i. In this way, by adding a new operator in the acceptable operators list inside the code, we allow expansion of supported expressions in a straightforward manner.For example, in transportation problem: Let us now follow the sequential steps that the parser takes to convert a simple example. Consider the well-known We will now provide in-depth analysis of how each of the main three parts in the model can be processed..tex model file that contains the variable symbols and their respective domains. This is done by trying to identify any of the previously presented reserved keywords specifically for this section. The parser reaches the bottom line by identifying the keyword mathbbR_\u00a0+ in this case. Commas can separate variables belonging to the same domain, and the corresponding parsing function splits the collections of variables of the same domain and processes them separately.The parser first attempts to locate the line of the x. The platform then builds two Python lists with the name of the variables found and their respective types.In this case, the parser identifies the domain and then rewinds back inside the string expression to find the variable symbols. It finds no commas, thus we collect only one variable with the symbol minimize) and tries to identify any involved variables in the objective function. In a different scenario, where not all of the model variables are present in the objective function, a routine identifies one-by-one all the remaining variables and their associated index sets in the block of the given constraint sets.The parser then reads the optimization sense respectively will be analyzed separately. In this case, the upper one is empty, so the lower one contains all the indexes for which the summation has to scale. Separated by commas, a simple extraction gives i,\u00a0j to be used for the Pyomo for-loop in the expression. There is no colon identified inside the lower bracket of the summation, thus no further identification of conditional indexing is required.The parser first attempts to locate any summation symbols. Since this is successful, the contained expression is extracted as split function is then applied on the extracted mathematical expression c_{i,\u00a0j}x_{i,\u00a0j} to begin identification of the involved terms. Since there are no operators we have a list containing only one item; the combined expression. It follows that the underscore characters are used to frame the names of the respective components. It is easy to split on these characters and then create a list to store the pairs of the indexes for each component. Thus, a sub-routine detects the case of having more than just one term in the summation-extracted expression. 
In this example, c is automatically identified as a parameter because of its associated index set, which was identified through the underscore character, and because it does not belong to the list of variables. The global list of parameters is then updated by adding c, as are the parameters for the current constraint/objective expression. This helps us clarify which parameters are present in each constraint, as well as the set of (unique) parameters for the model thus far, as scanning goes on. Once the parameter c and the variable x are identified and registered with their respective index sets, we proceed to read the constraint sets. For these operations, the parser creates expressions such as the ones shown below:

________________________________________________________________________
model.i = Set(dimen=1)
model.j = Set(dimen=1)
model.c = Param(model.i, model.j)
model.x = Var(model.i, model.j, domain=NonNegativeReals)
________________________________________________________________________

Since the objective function summation symbol was correctly identified together with the respective indexes, the following code is generated and executed:

________________________________________________________________________
def obj_expression(model):
    model.F = sum(model.c[i, j] * model.x[i, j]
                  for i in model.i for j in model.j)
    return model.F

model.OBJ = Objective(rule=obj_expression)
________________________________________________________________________
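As a side note on the "generated and executed" step: one straightforward mechanism, sketched below purely under our own assumptions (the platform's internal routine is not reproduced here), is to assemble the emitted statements as strings and run them with Python's exec() against the model object:

________________________________________________________________________
from pyomo.environ import (AbstractModel, Set, Param, Var,
                           Objective, NonNegativeReals)

model = AbstractModel()

# Statements as the parser might emit them for the transportation example
generated = [
    "model.i = Set(dimen=1)",
    "model.j = Set(dimen=1)",
    "model.c = Param(model.i, model.j)",
    "model.x = Var(model.i, model.j, domain=NonNegativeReals)",
]
for stmt in generated:
    exec(stmt)  # each statement attaches one component to the model
________________________________________________________________________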
Since the constraint sets are very similar, for shortness we will only analyze the first one. The parser first locates the constraint type by finding one of the following operators: ≤, ≥, =. It then splits the constraint into two parts, left and right of this operator. This is done to carefully identify the position of the constraint-type operator for its placement into the Pyomo constraint expression later on.

The first components the parser gives are the terms identified raw in the expression. The parameter a is identified on the fly, and since x is already registered as a variable, the parser proceeds to register only the new parameter by generating the following Pyomo expression:

________________________________________________________________________
model.a = Param(model.i)
________________________________________________________________________

The platform successfully identifies which terms belong to the summation and which do not, and separates them carefully. Eventually, the ∀ symbol gives the list of indexes for which the constraints are being generated. This portion of information goes in replacing X in the structure of a Pyomo constraint definition:

________________________________________________________________________
def axb_constraint_rule_1(model, X):
________________________________________________________________________

and the full resulting function is:

________________________________________________________________________
def axb_constraint_rule_1(model, i):
    model.C_1 = sum(model.x[i, j] for j in model.j) <= model.a[i]
    return model.C_1

model.AxbConstraint_1 = Constraint(model.i, rule=axb_constraint_rule_1)
________________________________________________________________________

Developing a parser that would be able to understand almost every different way of writing mathematical models in LaTeX is nearly impossible; however, even after framing the way the user may write down the models, there are some challenges to overcome, for instance the naming policy for the variables and parameters. One would assume that these cause no problems, but issues arise because in formal modeling languages the user explicitly states the names and the types of every component of the problem. From the sense of the objective function to the names and types of the variables and parameters, as well as their respective sizes and the names of the index sets, everything is explicitly defined. This is not the case in this platform: the parser recognizes the parameters and index sets with no prior information given. This in turn imposes trade-offs on the way we write the mathematical notation. For instance, multiple index sets have to be separated by commas, as in x_{i,j}, instead of writing x_{ij}.

On the other hand, using a symbolic representation of the models in LaTeX can enable the user to quickly identify errors in the description of the model, the involved variables, the parameters, or their mathematical relationships. This is in contrast to debugging models developed directly in a programming language or in an AML, where the detection of such errors or typos is more challenging.

By scanning a constraint, the parser quickly identifies, as mentioned, the associated variables. In many cases parameters and variables may have multiple occurrences in the same constraint. This creates a challenging environment for locating the relationships of the parameters and the variables, since they appear in multiple locations inside the string expressions and in different ways. On top of this, the name of a parameter can cause identification problems because it might be a sub- or super-string of the name of another parameter, e.g., parameter AB and parameter ABC. Naming conflicts are therefore carefully resolved by the platform by meticulously identifying the exact location and occurrences of each term, as in the sketch below.
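One simple way to achieve such exact matching — shown here purely as an illustration under our own assumptions, not as the platform's actual code — is to reject matches that are immediately preceded or followed by another letter:

________________________________________________________________________
import re

def occurrences(name, expression):
    # Exact occurrences of a component name: 'AB' must not match
    # inside 'ABC'; underscores and braces delimit the index sets.
    pattern = r'(?<![A-Za-z])' + re.escape(name) + r'(?![A-Za-z])'
    return [m.start() for m in re.finditer(pattern, expression)]

expr = 'AB_{i} + ABC_{i,j}'
print(occurrences('AB', expr))   # [0]  -- the standalone parameter AB only
print(occurrences('ABC', expr))  # [9]  -- the parameter ABC
________________________________________________________________________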
The CPU time required for each step in the modeling process of the platform can be found in the Supplementary Information. It can be noted that the parser is the least time-consuming step, which clearly demonstrates the efficiency of the platform. The Pyomo model generation and solver (CPLEX in our measurements) steps and their associated CPU times are completely outside of the parser's control. However, it is essential to get an idea of how these timings compare to each other, given the addition of this extra, higher level of abstraction at the beginning of the modeling process.

Challenges also arise in locating which of the terms appearing in a constraint belong to summations, and to which summations; especially when items have multiple occurrences inside a constraint, a unique identification is needed in order to decide whether or not to include a parameter (or a variable) inside a specific summation. We addressed this with the previously introduced binary lists: for each summation symbol within an expression, a list is generated in which the items included in that summation are activated (1) and the remaining ones are not (0).

Another challenge is the extension of the platform to support nonlinear terms, where each term itself can be a combination of various operators and mathematical functions.

Finally, it is worth mentioning that the number of lines/characters needed to represent a model in LaTeX is substantially smaller than for the equivalent model in Pyomo. In this respect, the platform accelerates the model development process.

We presented a platform for rapid model generation using LaTeX as the input language for mathematical programming, starting with the classes of LP, MILP and MIQP. The platform is based on Python and parses the input to Pyomo to successfully solve the underlying optimization problems. It uses a simple GUI to facilitate model and data input, based on Django as the web framework. The user can exploit locally installed solvers or redirect to the NEOS server. This prototype platform delivers transparency and clarity, speeds up the model design and development process (by significantly reducing the characters required to type the input models), and abstracts the syntax away from programming languages and AMLs. It therefore delivers reproducibility and the ability to replicate scientific work effectively for an audience not necessarily versed in coding. Future work could involve expansion to support nonlinear terms as well as differential and algebraic equations, sanity checking and error catching on input, the ability to embed explanatory comments in the input model file which would transfer to the target AML, extending the functionality concerning bounds on the variables, and adding further support for built-in LaTeX commands (such as \left) which would capture more complex mathematical relationships.

Supplemental Information 1: 10.7717/peerj-cs.161/supp-1"}
{"text": "In the article entitled "Coronavirus Disease 2019 in Recipient of Allogeneic Hematopoietic Stem Cell Transplantation: Life-threatening Features Within the Early Post-engraftment Phase" (HemaSphere. 2020;4:e448), there was an error in the original published title. "Life-threatening" instead appeared as "Life-threating." The title has now been corrected online: https://journals.lww.com/hemasphere/Fulltext/2020/08000/Coronavirus_Disease_2019_in_Recipient_of.14.aspx"}
{"text": "Ultrasonic vocalizations (USVs) analysis is a well-recognized tool to investigate animal communication. It can be used for behavioral phenotyping of murine models of different disorders. The USVs are usually recorded with a microphone sensitive to ultrasound frequencies and they are analyzed by specific software.
Different call typologies exist, and each ultrasonic call can be manually classified, but this qualitative analysis is highly time-consuming. In this framework, we proposed and evaluated a set of supervised learning methods for automatic USVs classification. This could represent a sustainable procedure for deeply analyzing ultrasonic communication, as well as a standardized analysis. We used manually built datasets obtained by segmenting the USVs audio tracks analyzed with the Avisoft software, and then by labelling each of them into 10 representative classes. For the automatic classification task, we designed a Convolutional Neural Network that was trained receiving as input the spectrogram images associated with the segmented audio files. In addition, we also tested some other supervised learning algorithms, such as Support Vector Machine, Random Forest and Multilayer Perceptrons, exploiting informative numerical features extracted from the spectrograms. The results showed that considering the whole time/frequency information of the spectrogram leads to significantly higher performance than considering a subset of numerical features. In the authors' opinion, the experimental results may represent a valuable benchmark for future work in this research field.

Rodent models are good tools for scientific research because they allow researchers to study and understand the biological mechanisms underlying different pathologies. Behavioral alterations in animal models offer markers for the symptoms of human diseases.

The USVs are generally recorded with an ultrasound-sensitive microphone and analyzed by specific software applications. Each syllable can be classified manually based on specific features, such as frequency, duration, amplitude and general shape. The manual classification provides a detailed characterization, but it is very time-consuming and may be subject to personal interpretation. Even using specific and professional tools, this task usually takes a lot of time and appears to be only partially automatized. Indeed, it is possible to automatically analyze quantitative parameters, but it is more difficult to evaluate qualitative parameters, such as the different typologies of ultrasonic calls. For these reasons, it could be very interesting to find a method that automatically processes vocalizations starting from the audio tracks. This fundamental and challenging step could speed up the analysis of ultrasonic communication and also provide insight into the meaning of different USVs.

Here we propose an ad hoc method for automatic USVs classification on the basis of the well-known USVs classification pattern published by Scattoni and colleagues.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes

**********

3.
Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party) those must be specified.

Reviewer #1: No
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.

Reviewer #1: The authors developed several classifiers capable of classifying individual vocalizations according to their spectrographic shapes as described by Scattoni et al, 2008. The work employs a quite comprehensive set of machine learning methods, including Support Vector Machines (SVM), Random Forests (RF), and fully connected and convolutional artificial neural networks. Two-dimensional spectrograms were used as the input data for the CNN-based classifiers. For the non-convolutional ANN, SVM and RF classifiers, the vocalizations were represented as sets of 16 automatically extracted features.

Although the authors made a significant effort in optimizing the hyperparameters of the SVM and RF classifiers, the set of features chosen to represent the vocalizations seems to be inadequate for the task of spectrogram shape classification. It can be hard to differentiate 'chevron' from 'complex' using only overall and marginal max/mean/min values. The confusion matrices provided in the paper confirm that.

The description of the architecture of the ANN provided by the authors lacks details. Two convolutional layers only can be insufficient for good performance in image classification tasks. The authors don't mention whether they use any of the overfitting-prevention techniques such as batch normalization, dropout, weight regularization and data augmentation. The fact that the CNN applied to full spectrograms performs worse than the SVM applied to the ambiguous feature set indicates that more effort can be invested in the optimization of the CNN architecture and the training protocol. Note also the performances shown by CNNs in much more complex image classification tasks (CIFAR100) and the RF performance demonstrated by Vogel et al, 2019 solving a similar problem.

According to my assessment, the authors need to address the major and minor points listed below.

Major

1. For statistical confidence, a cross-validation technique (e.g. 10-fold) should be applied for model assessment, and the results should be shown together with their confidence intervals.

2.
The feature set used for the RF, SVM and non-convolutional ANN input doesn't seem to be adequate for the desired task. The authors should extend the feature set and/or use the entire frequency envelope data as the input.

3. The architecture of the ANNs should employ widely used techniques for overfitting prevention: batch normalization/dropout. It would be good to employ data augmentation to help the CNN perform better. The batch size also can be increased to help improve the performance and/or convergence speed.

Minor

1. The SVM OVA approach requires confidence estimation. Which confidence estimation approach was used?

2. The ANN architecture should be described more thoroughly: convolutional layer kernel sizes, strides, dropout/batch normalization usage, the stochastic gradient descent optimizer used, and the weight initialization strategy.

3. The report of Vogel et al, 2019, 'Quantifying ultrasonic mouse vocalizations using acoustic analysis in a supervised statistical machine learning framework', solving a similar task, should be mentioned.

4. Strain data is not used in the analysis anyhow. It would be interesting to compare the accuracies of the two strains' shape prediction results taken separately.

Typos
96 'can not applies'
124 'postnatal (PND)'

Reviewer #2: Overview: The authors manually segmented and labeled an impressive number of mouse USVs (48699) according to categories developed by Scattoni and colleagues (2008). A sampling of 1199 USVs per category was used to train several supervised classification algorithms: Support Vector Machines (SVM), Random Forests (RF) and Artificial Neural Networks (ANN), in several configurations. The authors conclude that the best results are obtained by Support Vector Machines with the One-VS-All configuration. However, no method was particularly accurate. Precision, recall, and accuracy fell between 51.4% and 68.5% for all classifiers. I believe the moderate accuracy of the classifiers described in this report is not due to any deficiencies in the methodology employed by the authors; rather, it is due to the fundamental inaccuracy of human-defined USV classification.

Main Issues:

The original creator of these particular USV categories, Maria Luisa Scattoni, has already published a paper using support vector machines (SVM) and random forests (RF) to categorize USVs. They achieved 85% recall. What more does this paper add?

Vogel, A.P., Tsanas, A. & Scattoni, M.L. Quantifying ultrasonic mouse vocalizations using acoustic analysis in a supervised statistical machine learning framework. Sci Rep 9, 8100 (2019). https://doi.org/10.1038/s41598-019-44221-3

Experimenter-derived call categories are generally falling out of favor. Substantial new evidence suggests that USVs don't categorize neatly into discrete groups, including the data in this paper. SVMs/RFs are capable of much higher accuracy when the training data actually comes from discrete groups with clear separations. The "short" calls in the present manuscript are a well-defined group and are thus categorized accurately (>90%). But other calls like "complex" and "composite" are frequently mis-categorized. This likely isn't a fault of the classifier. Rather, it captures the uncertainty within the human-created training data.

Tim Sainburg, Marvin Thielk, Timothy Q Gentner (2019) Latent space visualization, characterization, and generation of diverse vocal communication signals. bioRxiv 870311; doi: https://doi.org/10.1101/870311

Many methods are now available for un-biased call classification.
These categories can then be validated through the behavioral/contextual usage of the calls. Call categories that are used identically during behavior can be collapsed. This method is more sophisticated and ethologically relevant than creating categories based on experimenter visual inspection.

Sangiamo, D.T., Warren, M.R. & Neunuebel, J.P. Ultrasonic signals associated with different types of social behavior of mice. Nat Neurosci 23, 411–422 (2020). https://doi.org/10.1038/s41593-020-0584-z

The training data, code, and finalized classifiers are the most valuable elements of this manuscript, but I did not see any indication that they will be distributed to the field. Without this, the manuscript just describes their internally used classifier with moderate accuracy.

Final Thoughts:

While there is nothing wrong with the scientific methodology employed in this paper, I would like to see all of the issues above addressed before considering the manuscript for publication.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

30 Sep 2020

We would like to take this opportunity to sincerely thank the Reviewers for their valuable comments. We have carefully revised our manuscript, taking all the comments and suggestions into consideration.

General comments
__________________________________________________________________________

First, we list the main efforts we have spent to enhance the experimental phase, in accordance with the Reviewers' recommendations:

1) Expansion of the statistical analysis of the results;
2) Rerun of the experiments after:
a. Extending the standard feature space, for feature-based learning algorithms;
b. Improving the Convolutional Neural Network (CNN) architecture by also including pertinent techniques for overfitting prevention;
3) Implementation of two new sets of experiments, namely:
a. Classification after splitting the dataset into the two genotype-based datasets;
b. Classification performed by removing one class.

The new experiments have led to an overall improvement in the classification performance, especially when the CNN is used. Of course, the updated results are reported in the new version of the paper. We changed the title of the manuscript to "Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks", highlighting the importance of the Convolutional Neural Networks in our work.

Beyond the experimental aspect, another point that has contributed importantly to the paper revision is related to the work that was suggested by both Reviewers, namely:

Vogel, A.P., Tsanas, A. & Scattoni, M.L. Quantifying ultrasonic mouse vocalizations using acoustic analysis in a supervised statistical machine learning framework. Sci Rep 9, 8100 (2019).

Accordingly, in the revised paper we discuss the main points that differentiate our work from the above-cited paper, which has been properly referenced therein. Furthermore, we want to point out that we have now shifted our attention to the classification method based on the CNN architecture, since in the 2019 Scientific Reports paper some traditional machine learning (ML) algorithms were already tested. Consequently, the discussion of the ML techniques is more limited with respect to the first submission.
This change of focus is also motivated by the fact that the new experiments, carried out in the revision stage, have shown notably superior performance when using CNN as stated above. In addition, as pointed out in the 3) b. item above listed, in order to make the comparison as fair as possible, we have also implemented a further set of experiments, in which the classification has been performed by removing the class harmonic, as done in Vogel\u2019s paper.As a final note, we also mention that we will make both the dataset and the implementation code publicly available.In the following, we reply item by item to the specific Reviewers\u2019 comments.Response to Reviewer #1__________________________________________________________________________Comment 1 (major)\u2022 For statistical confidence, the cross-validation technique (e.g. 10-fold) should be applied for model assessment and the results should be shown together with their confidence intervals.Response: Following this valuable suggestion, we have applied 10-fold cross-validation for model assessment. In the revised paper, we have reported the results as mean \u00b1 standard deviation.__________________________________________________________________________Comment 2 (major) and Comment 3 (minor)\u2022 The feature set used for RF, SVM and non-convolutional ANN input doesn\u2019t seem to be adequate for the desired task. The authors should extend the feature set and/or use entire frequency envelope data as the input.\u2022 The report of Vogel et al, 2019 \u2018Quantifying ultrasonic mouse vocalization using acoustic analysis in a supervised statistical machine learning framework\u2019 solving similar task should be mentioned.Response: The problem expressed in Comment 2 (major) also implicitly deals with Comment3 (minor), therefore we would like to address both the comments in the same answer.We agree with the Reviewer that the results obtained by using the feature set described in the original paper suggest a perfectible ability to generalize the investigated machine learning models. As a matter of fact, we have extended the original feature set by including other features extracted from the spectrograms.The new features selection has been based from the paper suggested by the Reviewer . We have carefully read that report, in which the authors propose an optimal feature subset of 8 acoustic measures for USV classification. They are duration, quart 50 (start), peak freq (stddeventire), peaktopeak, max freq (stddeventire), peak freq (start), peak freq (end) and quart 75 (start). Since 4 of these 8 features are already included in the original feature set, namely, duration, peak freq (start), peak freq (end) and peak freq (stddeventire), we have just added the remaining 4 features, leading to a final feature space formed by 20 features .The new performances outperform the ones referred to the submitted paper, even if the strongest improvement has been obtained by enhancing the CNN architecture, as pointed out in the next response.______________________________________________________________________________Comment 3 (major) and Comment 2 (minor)\u2022 The architecture of ANNs should employ widely used techniques for overfitting prevention: batch normalization/dropout. It would be good to employ data augmentation to help the CNN perform better. 
The batch size also can be increased to help improve the performance and/or convergence speed.\u2022 ANN architecture should be described more thoroughly: convolutional layers kernel sizes, strides, dropout/batch normalization usage, stochastic gradient descent optimizer used, weight initialization strategy.Response: Regarding the issues detailed in Comment 3 (major), it is undoubtable that overfitting seems to afflict the proposed Convolutional Neural Network (CNN). The previously reported accuracy in training and testing phases are 87.0% and 58.3%, respectively, indicating that our model had some problem to generalize new examples not included in the training dataset.Agreeing with the reviewer suggestions, some techniques for overfitting prevention have been implemented, listed in the following:a. Regularization, i.e., dropout;b. Batch normalization;c. Testing different batch sizes;d. Data augmentation.In order to properly illustrate how we have implemented the just mentioned techniques, it can be helpful to start by describing the CNN architecture, that in turn also answers to Comment 2 (minor). First of all, we want to point out that substantial changes have been applied in the model architecture: indeed, the introduction of methods to prevent overfitting came with an increase of the architecture complexity.The final model is composed by 5 convolutional layers constituted by 32, 64, 64, 128 and 128 filters, respectively. The convolutional layers kernel sizes are 7x7 and 5x5 in the first two layers, and 3x3 in the successive three. The stride is set to 1 for each layer.A max-pooling layer is inserted after each convolutional layer. The pooling size is fixed to 3x3, while the stride is set to 2x2. The tensor is then flattened, and two fully connected layers are inserted, the first of size 1024 with ReLU activation function, and the second of size 10 with Softmax activation function. We mention that the Adam optimization algorithm has been used as stochastic gradient descent optimizer, and that the uniform distribution has been adopted to initialize the network weights. The description of the final CNN architecture has been thoroughly illustrated in the new version of the paper.To prevent overfitting, the strategies that we referred above as a., b., c., and d., have been tested. Dropout, i.e. a., deletes a random sample of the activations during training. Of course, dropout causes information loss, in particular, losing something in the first layer propagates that loss to the whole network. We have designed several configurations, i.e., we have located dropout layers in different positions and tested a range of rates (the fraction of input units to drop), from 0.2 to 0.8. The batch normalization technique (b.) has been similarly evaluated: it helps to coordinate the update of multiple layers in the model. We have experimented batch normalization layers both before and after the activation functions, and in parallel we have considered different batch sizes (c.) in the training phase.In the end, after this thorough hyperparameters optimization/tuning, the best model resulted the one with a dropout layer after each convolutional/max-pooling layer, except the first one, setting the rates to 0.2, 0.3, 0.4 and 0.5. Then, a further dropout layer was added after the first fully-connected layer , and finally batch normalization was included before the Softmax activation function of the last fully-connected layer. The batch size finally adopted is 32. 
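For concreteness, the following is a minimal sketch of the architecture as just described. The deep learning framework is not stated above, so Keras is used here purely for illustration; the input spectrogram size (128×128, single channel), the 'same' convolution padding, and the dropout rate after the dense layer (0.5) are our assumptions, while the remaining choices follow the description.

______________________________________________________________________________
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 1), n_classes=10):  # input size assumed
    m = models.Sequential()
    # Block 1: 32 filters, 7x7 kernel, stride 1, no dropout (as described)
    m.add(layers.Conv2D(32, 7, strides=1, padding='same', activation='relu',
                        input_shape=input_shape))
    m.add(layers.MaxPooling2D(pool_size=3, strides=2))
    # Blocks 2-5: 64/64/128/128 filters, kernels 5/3/3/3,
    # dropout rates 0.2/0.3/0.4/0.5 after each conv/max-pooling pair
    for filters, kernel, rate in [(64, 5, 0.2), (64, 3, 0.3),
                                  (128, 3, 0.4), (128, 3, 0.5)]:
        m.add(layers.Conv2D(filters, kernel, strides=1, padding='same',
                            activation='relu'))
        m.add(layers.MaxPooling2D(pool_size=3, strides=2))
        m.add(layers.Dropout(rate))
    m.add(layers.Flatten())
    m.add(layers.Dense(1024, activation='relu'))
    m.add(layers.Dropout(0.5))          # rate after the dense layer: assumed
    m.add(layers.Dense(n_classes))
    m.add(layers.BatchNormalization())  # batch normalization before the softmax
    m.add(layers.Activation('softmax'))
    return m

model = build_cnn()
model.compile(optimizer='adam',  # Adam optimizer, as described
              loss='categorical_crossentropy', metrics=['accuracy'])
# training would then use model.fit(..., batch_size=32)
______________________________________________________________________________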
By applying all of these adjustments, the performance of the final CNN architecture has notably improved: indeed, the test accuracy has been enhanced from 58.3% to 78.8%.Here is a final note on data augmentation (d.): given the characteristics of the spectrograms, some image transformations cannot be analyzed, since they would invalidate the ground-truth classification. For example, rotation or horizontal/vertical flipping could lead the network to confuse the upward class with the downward class, and vice-versa. For this reason, and also as suggested by the reviewer, we have increased the dataset by shifting and cropping the spectrogram images at random: however, the experiments have shown that data augmentation does not significantly improve the performance of the model. As with the other points, the discussion on overfitting prevention has been properly argued in the revised paper.______________________________________________________________________________Comment 1 (minor)\u2022 SVM OVA approach requires confidence estimation. Which confidence estimation approach was used? Response: For SVM OVA method, we set a maximum confidence strategy, similar to the weighted voting strategy from OVO systems. So, the output class is taken from the classifier with the largest positive answer. We have reported this detail in the revised version of the paper.______________________________________________________________________________Comment 4 (minor)\u2022 Strain data is not used in the analysis anyhow. It would be interesting to compare the accuracies of two strains' shape prediction results taken separately. Response: We have taken into consideration this valuable suggestion. As a matter of fact, in the revised paper we have included a new set of experiments, by testing the classification methods on the genotype-based datasets taken separately.______________________________________________________________________________Typos\u2022 96 'can not applies' \u2022 124 'postnatal (PND)' Response: We have corrected the typos indicated by the Reviewer.Response to Reviewer #2__________________________________________________________________________Comment 1\u2022 The original creator of these particular USV categories, Maria Luisa Scattoni, has already published a paper using support vector machines (SVM) and random forests (RF) to categories USVs. They achieved 85% recall. What more does this paper add?Vogel, A.P., Tsanas, A. & Scattoni, M.L. Quantifying ultrasonic mouse vocalizations using acoustic analysis in a supervised statistical machine learning framework. Sci Rep 9, 8100 (2019). Response: We have meticulously read the suggested report, and we agree that it is important to shed light on the improvements led by our work with respect to Vogel\u2019s one. We want to focus here on the two main breakthroughs that in our opinion are worthy to be pointed out.First, we made a significant effort to build our dataset. We performed our experiments on 48699 samples, which is a much more sizable dataset than the 225 samples evaluated in the suggested paper. It is well-known that the bigger the sample size is, the more accurate the research results are. That is because larger dataset cardinality allows to better determine the average values of data and reduce potential errors from testing a small number of possible outliers. As a consequence, the statistical analysis becomes more accurate and the margin of error smaller. 
Given these crucial considerations, we believe that our results benefit from a very effective statistical soundness, and thus that they may represent a further valuable benchmark for future works in this research field. Of course, the dataset will be made publicly available.The second aspect we want to mention is that in our work we explored the classification ability of a Convolutional Neural Network (CNN), extending the analysis beyond traditional machine learning algorithms, such as Support Vector Machine and Random Forest, employed in Vogel\u2019s paper. To the best of the authors\u2019 knowledge, this work is the first to propose a deep learning architecture aimed to classify the ten categories originally introduced in [17]. Furthermore, the CNN model has been notably modified during the revision phase. Accordingly, we have included all of these modifications in the revised paper. In particular, if compared with the original proposed architecture, we have changed the network topology by inserting more convolutional layers and increased the number of the filters composing them, in parallel to have investigated some techniques for overfitting prevention, such as regularization (drop out) and batch normalization (please find more details in the new version of the paper). As a consequence of such improvements, the performance of the final CNN model now achieves an accuracy of 78% on the test dataset, strongly improving the results reported in the original paper (where the test accuracy was 58.3 %).\u2022 Experimenter derived call categories are generally falling out of favor. Substantial new evidence suggests that USVs don\u2019t categorize neatly into discrete groups, including the data in this paper. SVMs/RFs are capable of much higher accuracy when the training data actually comes from discrete groups with clear separations. The \u201cshort\u201d calls in the present manuscript are a well-defined group and are thus categorized accurately (>90%). But other calls like \u201ccomplex\u201d and \u201ccomposite\u201d are frequently miss-categorized. This likely isn\u2019t a fault of the classifier. Rather, it captures the uncertainty within the human created training data.Tim Sainburg, Marvin Thielk, Timothy Q Gentner (2019) Latent space visualization, characterization, and generation of diverse vocal communication signals. bioRxiv 870311.Response: We think that this different perspective is very interesting. We agree that automatic USVs classification into a fixed number of discrete groups is a challenging task, as also suggested by the paper cited in the comment. Indeed, the results reported in the originally submitted paper confirmed the difficulty of distinguishing particular classes, such as complex and composite.Nevertheless, even if manual classification is an undeniably time-consuming activity and also vulnerable to personal interpretation, it can be performed within a more than reasonable margin of error, especially when the general shape of the spectrogram is analyzed.Even acknowledging an intrinsic uncertainty in the human labelling of data, in our work we addressed the problem of designing efficient and accurate machine learning and deep learning models, aimed to automatically classify vocalizations starting from the audio tracks. The performance obtained from the new set of experiments suggests that crucial improvements have been implemented to the originally proposed models. 
Indeed, the new results show a lesser uncertainty in USVs categorization, indicating that well-defined models can actually lead to significant performance even when discrete vocalizations classification is investigated. This discussion has also been inserted in the revised paper, citing the suggested article.

______________________________________________________________________________

Comment 2

• Many methods are now available for un-biased call classification. These categories can then be validated through the behavioral/contextual usage of the calls. Call categories that are used identically during behavior can be collapsed. This method is more sophisticated and ethologically relevant than creating categories based on experimenter visual inspection.

Sangiamo, D.T., Warren, M.R. & Neunuebel, J.P. Ultrasonic signals associated with different types of social behavior of mice. Nat Neurosci 23, 411–422 (2020). doi.org/10.1038/s41593-020-0584-z

Response: In the literature, different methods of call classification exist. We used a method based on experimenter visual inspection, referring to the papers of Dr. Scattoni cited in our references. This method permits the operator to classify vocalizations based on features such as frequency content, duration, amplitude and general shape. This classification provides a detailed USV characterization, but it says nothing about the meaning of the calls. Recently, new technologies have been developed, such as a microphone array system and a sound source localization method, to localize and assign USVs to individual mice during a social context (doi.org/10.1038/s41593-020-0584-z). This permits distinct vocalization categories to be associated with different types of murine social behavior, giving important information about the meaning of the calls. Unfortunately, we do not have these sophisticated instruments to associate behaviors to calls. In addition, we recorded vocalizations emitted by pups and not adults, and in pups it is not possible to associate calls to behaviors as done by Sangiamo and colleagues, because pups are still very young (first days after birth) and unable to move or perform more complex actions; they still have closed eyes. Finally, we agree with the Reviewer that it is very interesting to understand the meaning of mice calls, and so, in the future, we would like to perform more sophisticated and ethologically relevant analyses on adult mice.

USV Recording and analysis
1. "postnatal (PND) 6, 8 ..." -> "postnatal day (PND) 6, 8..."
2. Threshold and hold time parameter values used in Avisoft for vocalization extraction should be provided here.

Description of the experiments
1. 'the first step consisted into' -> 'the first step consisted of'
2. 'giving this way a statistical soundness' -> 'ensuring the statistical soundness'?

Features extraction section
1. 'identified by the color...' -> 'identified by the brightness...'

Support vector machines
1. 'Maximum confidence strategy was used...' - how was the confidence estimated? SVM provides a binary result out of the box; the actual confidence assessment approaches vary.

When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please submit your revised manuscript by Dec 21 2020 11:59PM.
If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org.

Please include the following items when submitting your revised manuscript:

A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Gennady Cymbalyuk, Ph.D.
Academic Editor
PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party) those must be specified.

Reviewer #1: No
Reviewer #2: Yes

**********

5.
Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.

Reviewer #1: Though the authors addressed the points noted in the previous review, the document has undergone major changes that gave rise to several more major and minor inconsistencies to be addressed. Additionally, for the next review round, I would ask the authors to add line numbers in the document and to provide the source code for the models described in the paper.

Major

1. The Random Forests, MP, Stacking NN classification performance results are completely omitted in the paper. SVM performance results are hard to find in the text. At the same time, both RF and SVM are mentioned in the abstract and multiple times throughout the text. The results of all classifiers mentioned in the paper should be presented in a table. If the resulting accuracy is especially bad and is not worth considering, then it should be stated explicitly. Note that the results are compared with the results of the RF classifier in the Vogel et al, 2019 paper, so the omission of the RF results looks strange.

2. 'Data-preprocessing' paragraph. 'This operation prevent imbalanced dataset...' - usually, that is not the main reason to apply data augmentation when training a CNN. To deal with imbalanced data one can just use a weighted loss function, as you mention further in the paragraph. Data augmentation is used mostly to prevent overfitting, generating a potentially infinite, though maybe not diverse enough, set of training samples. From your description it is not clear which approach to data augmentation is used for the best-performing CNN: statically generated examples with random crops and shifts, just to balance the source data set, or the on-the-fly generation of randomly altered samples during the training, together with or without loss function weighting? That should be stated explicitly. Additionally, I think it is not necessary to explain that one should not rotate or flip images in that case. 'Data augmentation' is a general term for the generation of pseudo-diverse samples, and it is quite obvious that one should not use the techniques applied for object photograph classification here...

3. 'Proposed classification methods' paragraph: you briefly explain the principles of CNN and only after that you start explaining the trivial theory of MP, talking about weights, activation functions and neurons. The 'Multilayer perceptron' sub-section should go first, since it describes the most basic things on which the CNN is based as well. Furthermore, I don't think Fig. 2 is worth placing in the paper. That diagram is quite trivial, it occupies half of a page and has been known since the 70s.
It can be found in any textbook about ANN basics. Why not use that space to depict the actual architecture of the CNN + ANNs you designed, maybe together with some trained kernel weights visualization or other data? For example, see the figures in our recently published work: Ivanenko et al, 2020, "Classifying sex and strain from mouse ultrasonic vocalizations using deep learning".

Minor

Abstract
1. tested some supervised ... -> tested some other supervised ...
2. extracted by the spectrograms -> extracted from the spectrograms

Introduction
1. "... on such a fixed repertoire of calls typologies". I think this should be rephrased. Maybe "on a predefined set of call types"?
2. Please cite our paper Ivanenko et al, 2020, "Classifying sex and strain from mouse ultrasonic vocalizations using deep learning", PLOS CB, in the introduction. Though we don't use Scattoni classical vocalization types there, we also classify vocalizations using CNNs based on their spectrogram shape, thus implementing the 'top-down' approach you mentioned.
3. 'Simulation results' - I think the word 'simulation' is misleading here.

USV Recording and analysis
1. "postnatal (PND) 6, 8 ..." -> "postnatal day (PND) 6, 8..."
2. Threshold and hold time parameter values used in Avisoft for vocalization extraction should be provided here.

Description of the experiments
1. 'the first step consisted into' -> 'the first step consisted of'
2. 'giving this way a statistical soundness' -> 'ensuring the statistical soundness'?

Features extraction section
1. 'identified by the color...' -> 'identified by the brightness...'

Support vector machines
1. 'Maximum confidence strategy was used...' - how was the confidence estimated? SVM provides a binary result out of the box; the actual confidence assessment approaches vary.

Reviewer #2: The authors have made significant efforts to improve their CNN-based classification architecture and have at least discussed and considered the theoretical limitations I posed in review. I agree that the scale of the new experiment and improved classification accuracy now expand upon, rather than duplicate, the work of the Scattoni Lab. With the addition of a publicly available dataset and classification CNN, this work now makes a tangible contribution to the field and I recommend it be accepted for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

4 Dec 2020

We would like to take this opportunity to sincerely thank everyone involved in the review process. We are glad that the adjustments introduced in response to the reviewers' comments have been generally appreciated. The further changes suggested by reviewer #1 are detailed below.

Response to Reviewer #1
__________________________________________________________________________

Comment 1 (major)

The Random Forests, MP, Stacking NN classification performance results are completely omitted in the paper. SVM performance results are hard to find in the text. At the same time, both RF and SVM are mentioned in the abstract and multiple times throughout the text. The results of all classifiers mentioned in the paper should be presented in a table.
If the resulting accuracy is especially bad and is not worth considering than it should be stated explicitly.Note that the results are compared with the results of RF classifier in Vogel et al, 2019 paper, so the omitting of the RF results looks strange.Response: As mentioned in the previous response process, in the first revised paper we had moved the focus from the \u201cnumerical features\u201d-based classification techniques, both for distancing our work with respect to Vogel\u2019s paper (as requested by both the reviewers), and also because the new results obtained by using the Convolutional Neural Networks had outperformed the others. Accordingly, we had considered not interesting and redundant the list of all the performance for all the investigated techniques, with that not for that intending to diminish the importance of studying those methods in our work.However, we agree with the reviewer that a table containing the overview of all the results may benefit the completeness of the paper. As a matter of fact, in the new version of the paper, we have inserted such a table in the \u201cPerformance analysis\u201d section. Nonetheless, to streamline the paper, we just report the performance related to the entire mice vocalizations dataset, since the strain-based performances result are similar to the ones obtained for the entire dataset, and so they do not add significant extra information.Finally, the source code has been properly integrated as well.__________________________________________________________________________Comment 2 (major) 'Data-preprocessing' paragraph. 'This operation prevents imbalanced dataset...' - usually, that is not the main reason to apply data augmentation when training CNN. To deal with imbalanced data one can just use weighted loss function, as you mention further in the paragraph. Data augmentation is used mostly to prevent overfitting,generating a potentially infinite, though maybe not diverse enough set of training samples.From your description it is not clear which approach for data augmentationis used for the best-performing CNN: statically generated examples with random crops and shifts, just to balance the source data set, or the on-the-fly generation of randomly altered samples during the training, together/without using loss function weighting? That should be stated explicitly.Additionally, I think that is not necessary to explain that one should not rotate on flip images in that case. 'data augmentation' is a general term for generation of pseudo-diverse samples and that is quite obvious that one should not use the techniques applied for object photographs classification here...Response: Thanks for pointing this out. We take this space to better clarify how data augmentation has been used in our experiments, and we will report these considerations in the new version of the paper.For the CNN design, we have tested data augmentation for both overfitting and imbalanced dataset prevention, separately. In the first case, an offline generation of new examples with random shifts and crops is applied over the entire training dataset; when using data augmentation for this aim, a properly weighted loss function is employed during the training in order to handle the imbalanced dataset problem too. 
In the second one, data augmentation is performed to balance the source dataset, and so by generating new samples just for the under-represented classes.However, note that data augmentation has been tested in the experimental phase, but it has not been in the end used to obtain the final best-performing CNN model. Indeed, the experimental tests have revealed that just including dropout and batch normalization layers is the most efficient action for overfitting prevention (as reported in the \u201cClassification via CNN\u201d section). Furthermore, the experiments have also provided more uniform accuracy values across the classes when the loss function is properly weighted instead of repopulating the under-represented classes by implementing data augmentation. In conclusion, data augmentation has not been exploited for designing the most-performing CNN architecture.As mentioned before, in the new version of the paper, we have better clarified all these aspects by partially re-writing the former section \u201cData pre-processing\u201d, which is now titled \u201cClass imbalance handling\u201d. Therein, the reviewer will find that all his/her observations above have been properly considered. __________________________________________________________________________Comment 3 (major) 'Proposed classification methods' paragraph: you briefly explain the principles of CNN and only after that you start explaining trivial theory of MP, talking about weights, activation functions and neurons. 'Multilayer perceptron' sub-section should go first, since it describes the most basic things on which CNN is based as well.Furthermore, I don't think Fig.2 is worth placing in the paper. That diagram is quite trivial, it occupies half of a page and is known since 70s. It can be found in any textbook about ANN basics.Why not to use that space to depict the actual architecture of the CNN + ANNs you designed, maybe together with some trained kernel weights visualization or other data? For example, see the figures in our recently published workIvanenko et al, 2020, \"Classifying sex and strain from mouse ultrasonic vocalizations using deep learning\".Response: Thanks for the suggestion. We agree with the reviewer that the paper may be clearer by swapping the \u201cConvolutional Neural Networks\u201d and \u201cMultilayer Perceptron\u201d sub-sections.In the revised version of the paper, we have modified the description of the multilayer perceptrons without the support of Fig. 2. In its place, there is a new figure depicting the entire architecture of the designed CNN; such a figure has been suitably added in \u201cClassification via CNN\u201d section.__________________________________________________________________________Minor (Abstract) tested some supervised ... -> tested some other supervised ... extracted by the spectrograms -> extracted from the spectrogramsResponse: We have revised both the sentences as in 1) and 2).__________________________________________________________________________Minor (Introduction) \"... on such a fixed repertoire of calls typologies\". I think this should be rephrased. May be \"on a predefined set of call types\"? Please cite our paper Ivanenko et al, 2020, \"Classifying sex and strain from mouse ultrasonic vocalizations using deep learning\", PLOS CB in the introduction. Though we don't use Scattoni classical vocalization types there, we also classify vocalizations using CNNs basing on their spectrogram shape , thus implementing the 'top-down' approach you mentioned. 
3) 'Simulation results' - I think the word 'simulation' is misleading here. Response: We have revised both sentences as in 1) and 3). We have also carefully read the paper suggested in 2) and cited it in the introduction while mentioning the top-down approach.__________________________________________________________________________Minor 1) "postnatal (PND) 6, 8 ..." -> "postnatal day (PND) 6, 8..." 2) The threshold and hold time parameter values used in Avisoft for vocalization extraction should be provided here. Response: We have revised the sentence as in 1) and inserted the requested parameters in the appropriate section as suggested in 2).__________________________________________________________________________Minor (Description of the experiments) 1) 'the first step consisted into' -> 'the first step consisted of' 2) 'giving this way a statistical soundness' -> 'ensuring the statistical soundness'? Response: We have revised both sentences as in 1) and 2).__________________________________________________________________________Minor (Feature extraction section) 1) 'identified by the color...' -> 'identified by the brightness...' Response: We have revised the sentence as in 1).__________________________________________________________________________Minor (Support Vector Machines) 'Maximum confidence strategy was used...' - how was the confidence estimated? An SVM provides a binary result out of the box; the actual confidence assessment approaches vary. Response: The confidence is directly given by the decision function, which is the signed distance between the tested sample and the separating hyperplane. More precisely, the decision function is computed as f(x) = Σ_{i∈SV} y_i α_i K(x_i, x) + b, where x is the sample to predict; x_i are the support vectors (SV) that construct the hyperplane; b is the intercept; α_i are the dual coefficients computed in the SVM optimization problem; y_i are the binary labels corresponding to x_i; and K(x_i, x) = φ(x_i)^T φ(x) is the employed kernel. In the OVA strategy, we generate 10 classifiers: the first separates class 1 from the remaining ones, the second does the same for class 2, and so on for all 10 classes. The decision function is computed for each class as reported in the previous equation. The classifier that returns the largest positive value identifies the winner class.
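The max-confidence one-vs-all rule described in this response can be written out in a few lines. The sketch below is an illustrative reconstruction under the definitions above, not the authors' implementation; it assumes one (support vectors, labels, dual coefficients, intercept) tuple per binary classifier:

```python
import numpy as np

def ova_predict(x, classifiers, kernel):
    """One-vs-all SVM prediction by maximum confidence.

    For each class k the decision function is
    f_k(x) = sum_{i in SV_k} y_i * alpha_i * K(x_i, x) + b_k,
    and the classifier returning the largest value identifies the winner class.
    """
    scores = [
        sum(y_i * a_i * kernel(x_i, x) for x_i, y_i, a_i in zip(sv, ys, alphas)) + b
        for sv, ys, alphas, b in classifiers
    ]
    return int(np.argmax(scores))

# e.g. with an RBF kernel: kernel = lambda u, v: np.exp(-0.5 * np.sum((u - v) ** 2))
```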
Attachment. Submitted filename: Response to Reviewers.docx. 15 Dec 2020 Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks PONE-D-20-09612R2 Dear Dr. Premoli, We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Please consider the minor edits suggested by one of the reviewers. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance -- at onepress@plos.org. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. Kind regards, Gennady Cymbalyuk, Ph.D. Academic Editor PLOS ONE Additional Editor Comments: Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #2: All comments have been addressed**********2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes**********3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes**********4. Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified. Reviewer #1: Yes Reviewer #2: Yes**********5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes**********6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1: The authors carefully addressed all the comments given in the previous review round.
I recommend acceptance of the paper. Some minor grammar/stylistic/logic errors could be considered: line 391: generating to -> generating of/for; line 432: the CNN architecture has outperformed the other standard features-based methods -> the CNN architecture has outperformed the standard features-based methods (the CNN is not a feature-based method in the paper...); line 455: are arranged on the ten columns -> are arranged in ten columns; line 458: The values into the matrices are normalized -> the values in the matrices...; lines 463, 502: on the x axis there are the predicted labels: "on x axis there are..." sounds a bit ungrammatical to me; usually something like "x axis refers to ..." is used to describe the meaning of axes; line 519: The performance showed that by exploiting the whole time/frequency information of the spectrogram leads to significantly higher performance than considering a subset -> The performance showed that exploiting the whole time/frequency information of the spectrogram leads to significantly higher performance than considering only a subset of numerical features; line 525: The final set up on an automatic classification method will definitely solve the current main problems in USVs manual classification: long time consuming and operator-dependent. -> The final set up of(?) an automatic classification method will definitely solve the current main problems of USVs manual classification: its being a time consuming process and operator bias. Reviewer #2: The authors have done a good job responding to the additional reviewer comments. I recommended accepting the paper in the previous revision.**********7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Aleksandr Ivanenko Reviewer #2: Yes: Kevin Coffey 7 Jan 2021 PONE-D-20-09612R2 Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks Dear Dr. Premoli: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff, on behalf of Dr.
Gennady Cymbalyuk, Academic Editor, PLOS ONE"} +{"text": "The metabolic tumor volume (MTV) was calculated using a semi-automatic thresholding method, and the average tracer uptake within the MTV was converted to a standard uptake value (SUV). When GB was confirmed on T2- and contrast-enhanced T1-weighted MRI, animals were randomized into a treatment group (n = 5) receiving MRI-guided 3D conformal arc micro-irradiation (20 Gy) with concomitant temozolomide, and a sham group (n = 5). Effect of treatment was evaluated by MRI and [18F]FDG PET. To detect treatment response, we found that (SUVmean x MTV) is superior to MTV only. Using (SUVmean x MTV), [18F]FDG PET detects the treatment effect starting as soon as day 5 post-therapy, comparable to contrast-enhanced MRI. Importantly, [18F]FDG PET at delayed time intervals (240 min p.i.) was able to detect the treatment effect earlier, starting at day 2 post-irradiation. No significant differences were found at any time point for both the MTV and (SUVmean x MTV) of [18F]FCho PET. Both MRI and particularly delayed [18F]FDG PET were able to detect early treatment responses in GB rats, whereas in this study this was not possible using [18F]FCho PET. Further comparative studies should corroborate these results and should also include (different) amino acid PET tracers. As a result, for newly diagnosed glioblastoma (GB) patients with a good performance status, the standard of care now includes maximal surgical resection followed by combined external beam RT (60 Gy in 30 fractions) and TMZ. Several PET tracers have been investigated for brain tumor imaging, including fluorodeoxyglucose ([18F]FDG), [18F]Fluoroethyltyrosine ([18F]FET), [18F]fluoroazomycin arabinoside ([18F]FAZA), 3,4-dihydroxy-6-[18F]-fluoro-L-phenylalanine ([18F]FDOPA) and [18F]Fluoromethylcholine ([18F]FCho). Currently, the most widely used tracers are [18F]FDG and [18F]FET. [18F]FDG PET measures cellular glucose metabolism as a function of the hexokinase enzyme. However, due to its high uptake in normal brain parenchyma, the localization and the delineation of brain tumors is often difficult. One strategy to improve tumor-to-background contrast is to delay the time interval between the injection of [18F]FDG and PET acquisition, the so-called "dual phase imaging". Alternatives to [18F]FDG, such as radiolabeled amino acids, were developed, showing an increased contrast between brain tumors and normal brain tissue. The diagnostic potential of [18F]FET PET in brain tumors is well documented, and the RANO working group has recommended amino acid PET as an additional tool in the diagnostic assessment of brain tumors. An additional diagnostic value of [18F]FET PET compared with MRI, and a promising role for the distinction between tumor recurrence and aspecific post-therapeutic changes, have been shown. [18F]FAZA as a PET tracer may also have clinical relevance because tumor aggressiveness, failure to achieve local tumor control and an increased rate of tumor recurrence are all associated with hypoxia. A drawback of [18F]FAZA PET is that optimal imaging is performed a few hours post-injection and that the degree of hypoxia can theoretically fluctuate, influenced by therapy and the presence of acute versus chronic hypoxia. Finally, the metabolic information acquired by [18F]FCho PET has been shown to be able to distinguish high-grade glioma, brain metastases and benign lesions, and to identify the most malignant areas for stereotactic sampling. Grech-Sollars et al. concluded that [18F]FCho PET was able to differentiate WHO grade IV from grade II and III tumors, whereas MR spectroscopy differentiated grade III/IV from grade II tumors. Recently, the potential use of [18F]FCho PET/CT in the intraoperative management or radio-surgical approaches for glioma has been suggested, including intraoperative guidance in conjunction with MR spectroscopy.
Preclinical radiation research has been facilitated by the development of the Small Animal Radiation Research Platform (SARRP) (Xstrahl®, Surrey, UK). Previously, our group used the orthotopic allograft F98 GB rat model to mimic GB treatment in patients. The F98 GB rat model exhibits features of human GB with regard to its aggressiveness, histological appearance and lack of immunogenicity. Using the F98 GB rat model and the SARRP, we described and validated magnetic resonance imaging (MRI)-guided 3D conformal arc RT with concomitant chemotherapy to bridge the gap between radiation technology in the clinic and preclinical techniques. In this study, we investigated the potential of [18F]FDG and [18F]FCho PET, compared to contrast-enhanced MRI, to detect the early effect of combined radiation and TMZ treatment in the F98 GB rat model. In addition, we also investigated which modality is best suited for the early detection of treatment response. The study was approved by the Ghent University Ethical Committee for animal experiments (ECD 09/23-A). All animals were kept under environmentally controlled conditions with food and water ad libitum. Follow-up of all animals was done by monitoring their body weight, food and water intake, and their activity and normal behavior. The method of euthanasia was a lethal dose of pentobarbital sodium (180 mg/kg). Euthanasia was performed prior to the experimental endpoint if a decline of 20% in body weight was observed or when normal behavior severely deteriorated (e.g. lack of grooming). Full details of the protocol can be found in our previous publications. The assessment of the biological response was evaluated by small animal PET using [18F]FDG and [18F]FCho. [18F]FDG scans were performed 2, 5, 9 and 12 days after the start of treatment, while [18F]FCho scans were performed 1, 6, 8 and 13 days after the start of treatment. These time points were arbitrarily chosen because, empirically, GB rats survived approximately 14 days after the start of treatment, and [18F]FDG and [18F]FCho PET scanning was not possible on the same day. An overview of the complete imaging scheme is shown in Fig 2. Dynamic PET images were acquired in list mode using a dedicated small animal PET scanner. Rats were injected with 37.89 ± 0.35 MBq [18F]FDG or 39.55 ± 0.37 MBq [18F]FCho (mean ± SE) dissolved in 200 μL saline. The total acquisition time was 20 min for [18F]FCho PET, due to the fast kinetics of [18F]FCho, and 60 min for conventional [18F]FDG PET. In addition, a 30-min [18F]FDG PET scan was acquired 240 min after [18F]FDG administration (delayed imaging). All PET scans were reconstructed into a 200 × 200 × 64 matrix by a 2D Maximum Likelihood Expectation Maximization (MLEM) algorithm using 60 iterations and a voxel size of 0.5 × 0.5 × 1.157 mm. Identical reconstruction parameters were applied for [18F]FDG and [18F]FCho PET. The dynamically acquired PET data were reconstructed into 6 × 20 s, 3 × 1 min, 3 × 5 min and 2 × 20 min time frames for [18F]FDG scans, and 6 × 20 s, 3 × 1 min, 1 × 5 min and 1 × 10 min time frames for [18F]FCho scans.
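As a schematic illustration of the MLEM reconstruction mentioned above, the core multiplicative update can be sketched in a few lines. Actual scanner reconstructions use the vendor's system model, normalization and corrections, so this toy version (our naming) is for intuition only:

```python
import numpy as np

def mlem(A, y, n_iter=60):
    """Toy MLEM: reconstruct x from measured counts y ~ Poisson(A @ x).

    A: (n_measurements, n_voxels) system matrix; returns voxel intensities x.
    """
    x = np.ones(A.shape[1])                    # flat initial image
    sensitivity = A.T @ np.ones(A.shape[0])    # back-projection of ones
    for _ in range(n_iter):
        forward = A @ x                        # forward projection of the estimate
        ratio = y / np.maximum(forward, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x
```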
The metabolic tumor volume (MTV) was calculated based on a semi-automatic thresholding method using the PMOD software. MTV was defined on the last time frame of the dynamic [18F]FDG PET (40-60 min post-injection), on the delayed [18F]FDG PET (240 min post-injection) and on the last time frame of the dynamic [18F]FCho scan (10-20 min post-injection). First, a circular VOI was manually placed over a region with increased tracer uptake, excluding non-specific uptake such as uptake in the scalp. Within this VOI, MTV was defined as all voxels with an uptake ≥ 60% and ≥ 50% of the maximum uptake for [18F]FDG and [18F]FCho, respectively. The selection of the thresholds was done arbitrarily and based on visual inspection of the [18F]FDG PET scan 40-60 min post-injection, the delayed [18F]FDG PET scan 240-270 min post-injection and the [18F]FCho PET scan 10-20 min post-injection (see Fig 3). Injected activity was corrected for decay and residual activity in the syringe. In addition to the MTV, the SUVmean and (MTV x SUVmean) were calculated and included in the analysis. The TBRmax was defined as the ratio of the SUVmax of the tumor MTV to the SUVmean of a reference VOI located in the contralateral occipital normal brain region.
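The volume and uptake metrics defined above reduce to a few array operations. The sketch below is a hypothetical helper (our naming), assuming a decay-corrected image in kBq/cc and boolean masks for the search VOI and the contralateral reference region:

```python
import numpy as np

def pet_metrics(img_kbq_cc, voi_mask, ref_mask, threshold, injected_kbq,
                weight_g, voxel_ml):
    """Semi-automatic MTV plus SUVmean, (SUVmean x MTV) and TBRmax."""
    suv = img_kbq_cc * weight_g / injected_kbq               # SUV (g/mL)
    peak = img_kbq_cc[voi_mask].max()
    mtv_mask = voi_mask & (img_kbq_cc >= threshold * peak)   # 0.6 for FDG, 0.5 for FCho
    mtv_ml = mtv_mask.sum() * voxel_ml                       # metabolic tumor volume
    suv_mean = suv[mtv_mask].mean()
    tlg = suv_mean * mtv_ml                                  # (SUVmean x MTV)
    tbr_max = suv[mtv_mask].max() / suv[ref_mask].mean()     # tumor-to-brain ratio
    return mtv_ml, suv_mean, tlg, tbr_max
```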
Clinical MRI and PET images used to compare clinical and preclinical [18F]FCho PET were data from previously published studies. To evaluate non-specific [18F]FCho uptake due to blood-brain barrier (BBB) breakdown, we performed autoradiography and analyzed EB extravasation in a F98 GB rat tumor on day 16 after inoculation, as described previously. Evans Blue dissolved in saline was injected intravenously at a concentration of 4 mL per kg of body weight (t = 0 min), followed by injection of [18F]FCho. At t = 60 min, the rat was euthanized, and the dissected rat brain was instantly frozen in isopentane (VWR®) cooled by liquid nitrogen for 2 min, followed by 30 min of incubation at -20°C. The brain was then cut into 20 μm serial sections using a cryostat, with alternating slides for fluorescent staining and hematoxylin and eosin (H&E) staining. The H&E sections were dried prior to fixation in 4% paraformaldehyde. The slices for autoradiography were placed on a Super Resolution storage phosphor screen (in a red-lighted room) and incubated for 2.5 h. The film was scanned using the PerkinElmer Cyclone Plus (600 dpi). A picture was taken of the frozen brain tissue (Sony®), and TRITC (tetramethylrhodamine isothiocyanate) fluorescently labeled sections were imaged with a BD Pathway 435 automated imaging system (Becton Dickinson) equipped with a 10× objective. A montage of 20×15 images provided a complete overview of the brain section. Using the PMOD software, the H&E and autoradiography (AR) images were manually co-registered; the tumor volume of interest (VOI) was manually drawn on the H&E image and transferred to the AR image. The normal brain VOI consisted of a 5 x 5 mm square placed in the contralateral normal brain. Results for [18F]FDG, at the conventional and delayed time points, and for [18F]FCho PET are summarized in Table 2. To eliminate the influence of the differences in tumor volumes between individual animals, MTV values were also normalized to the pre-therapy MTV. The evolution of the normalized MTV and (SUVmean x MTV) for conventional [18F]FDG, delayed [18F]FDG and [18F]FCho PET is shown in Fig 4. The MTV on conventional [18F]FDG PET was significantly different between both groups on day 5 (p = 0.016). Using delayed [18F]FDG PET imaging, significant differences in MTV were present between both groups on day 9 (p = 0.032) and day 12 (p = 0.032). No significant MTV differences were found between the control and therapy group for [18F]FCho PET at any time point. The (SUVmean x MTV) on conventional [18F]FDG PET was significantly different between the control and treated group on day 5 (p = 0.008) post-irradiation, using the last time frame of the dynamic PET acquisition. On delayed [18F]FDG PET, a significant difference was found on day 2 (p = 0.032), day 9 (p = 0.032) and day 12 (p = 0.016) post-irradiation. No significant differences were found between the control and treated group for [18F]FCho PET at any time point. Longitudinal [18F]FDG and [18F]FCho PET/MRI images of a rat receiving control treatment are shown in Fig 5. Autoradiography showed intense [18F]FCho uptake in the F98 GB tumor and very low uptake in normal brain: the background-corrected mean tumor-to-mean normal brain ratio was 3.72 and the max tumor-to-mean normal brain ratio was 6.84. On the clinical images, patient (A) clearly shows necrosis in the tumor core, while this is not present in patient (B). However, both show a heterogeneous [18F]FCho uptake ranging from moderate to moderately intense at the invasion front of the tumor (372.0 MBq injected activity for patient (B)). Low uptake is noted in the normal brain tissue. In the F98 GB rat tumor, no gross central tumor necrosis is seen on contrast-enhanced T1-weighted MRI (C), and increased [18F]FCho uptake is present only in the upper left margin of the tumor. Surrounding extra-cranial organs, such as the salivary glands and the masticatory muscles, show intense [18F]FCho uptake (D). This is clearly visible on the preclinical PET, while in humans this uptake is not visible within the axial brain slice. In (B) and (C-D), the leakage pattern of the gadolinium contrast agent on MRI differs strongly from the [18F]FCho uptake pattern. Diffuse leakage of Gd in the entire tumor volume is seen on MRI (C), while a more localized choline uptake is seen in the upper left margin of the tumor just beneath the skull (D).
For [18F]FDG and [18F]FCho PET, a threshold of ≥ 60% and ≥ 50% of the maximum uptake, respectively, was selected based on visual inspection. In this study, (SUVmean x MTV) was superior to MTV alone in detecting an early treatment effect. (SUVmean x MTV) is also referred to as total lesion glycolysis (TLG), a well-known volumetric parameter that captures the glycolytic phenotype and overall tumor burden. Using (SUVmean x MTV), [18F]FDG PET acquired 40-60 minutes post-injection was able to detect treatment response as early as 5 days post-therapy. Similar results were found when evaluating the changes in contrast-enhanced tumor volume on MRI. Importantly, [18F]FDG PET acquired 4 hours post-injection was able to detect the treatment response even earlier, namely at day 2 post-irradiation. Amino acid tracers such as [18F]FET PET have been suggested to be better suited than [18F]FDG for brain tumor imaging and monitoring therapy response in brain tumor patients. [18F]FCho PET was first introduced for PET imaging of brain tumors by DeGrado et al. For choline PET, a tumor-to-normal-brain ratio (TBR) ≤ 1.4 might predict a longer overall survival in patients with suspected recurrent glioma after treatment. It has also been suggested that there is a good correlation between a change in the SUVmax of the tumor volume during RT and response. Finally, a [18F]FCho PET study in childhood astrocytic tumors confirmed the added value of [18F]FCho SUVmax and functional MRI apparent diffusion coefficient values to monitor therapy response. [18F]FCho PET might be able to detect a treatment-induced diminished cell proliferation rate because this choline PET analogue is a substrate for choline kinase, an enzyme commonly overexpressed in malignant lesions that is involved in the incorporation of choline into phospholipids, an essential component of all cell membranes. In cancer, increased cellular transport and higher expression of choline kinase lead to an increased uptake of radiolabeled choline. The uptake of radiolabeled choline in normal brain tissue is also much lower than that of [18F]FDG, ameliorating the delineation of tumor boundaries. In a previous clinical study, we investigated [18F]FCho PET compared to state-of-the-art conventional MRI using RANO criteria for early therapy response assessment in GB patients. We found that SUV values were not able to predict response, while (SUVmean x MTV) allowed prediction of therapy response one month after the completion of radiation therapy, however, not earlier than changes of tumor volume derived from contrast-enhanced MRI. In this study, we did not find significant differences at any time point for the MTV and (SUVmean x MTV) of [18F]FCho PET between the control and the treatment group. Based on these results, in rats, [18F]FCho PET was not able to detect early combined radiation and chemotherapy effects after the completion of treatment. We can only speculate about an explanation. In prostate cancer patients, the activity of radiolabeled choline in the tumor reached a maximum within a 5-min window following the injection. For brain imaging with [18F]FCho, several clinical reports performed emission scanning for 15 min, beginning 5-10 min after injection of the tracer. In our previous work, [18F]FCho uptake by all types of brain lesions was rapid, with minimal changes in uptake activity more than 6 min after administration, except for meningiomas. In patients with brain metastases, intratumoral [18F]FCho uptake also reached 80% and 90% of the total activity at 3±4 and 7±6 minutes post-injection, respectively. This rapid and sustained uptake appears to reach a plateau faster than that of [18F]FET. Based on a preclinical rat model for GB and multimodal imaging using MRI and PET with two different tracers to evaluate early treatment response after combined chemo-radiation therapy, we found that both MRI and PET can be used for this purpose. With regard to the choice of PET biomarker, [18F]FDG (and particularly [18F]FDG acquired 4 hours post-injection) is preferred over [18F]FCho. Further comparative studies should corroborate these results and should also include (different) amino acid PET tracers. S1 Fig. (TIF) 5 Jan 2021 PONE-D-20-36019 Assessment of the effect of therapy in a rat model of glioblastoma using [18F]FDG and [18F]FCho PET compared to contrast-enhanced MRI. PLOS ONE Dear Dr. Bolcaen, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The article describes the results of a pre-clinical study in a rat model of glioblastoma using [18F]FDG and [18F]FCho PET compared to contrast-enhanced MRI for the early detection of treatment response. The objective of the study is interesting and may help future potential applications regarding PET imaging in the field of primary brain tumors. However, the paper needs a revision as defined in the section of comments below. Please submit your revised manuscript by Feb 13 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s).
You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. We look forward to receiving your revised manuscript. Kind regards, Pierpaolo Alongi, Academic Editor, PLOS ONE. Additional Editor Comments: The article describes the results of a pre-clinical study in a rat model of glioblastoma using [18F]FDG and [18F]FCho PET compared to contrast-enhanced MRI for the early detection of treatment response. The objective of the study is interesting and may help future potential applications regarding PET imaging in the field of primary brain tumors. I agree with the comments of the two reviewers; the paper needs a major revision. The results of choline PET and FDG PET have to be discussed carefully because both radiopharmaceutical agents have limited use in this field. For choline PET, although the biodistribution of the tracer is fast compared to FDG, other studies suggest starting acquisition rapidly after injection and a time between 20 and 50 minutes for late imaging in order to have a good balance of the T/B ratio. A single Cho-PET dynamic acquisition 5-20 minutes after the injection may negatively affect the quality of the images. Please discuss this in the discussion and, where appropriate, among the limitations of the study. I suggest also reporting some missing references to recent representative articles on humans, e.g.: - Vetrano, I.G., Laudicella, R. & Alongi, P. Choline PET/CT and intraoperative management of primary brain tumors. New insights for contemporary neurosurgery. Clin Transl Imaging 8, 401-404 (2020). https://doi.org/10.1007/s40336-020-00398-6 - Alongi, P., Quartuccio, N., Arnone, A. et al. Brain PET/CT using prostate cancer radiopharmaceutical agents in the evaluation of gliomas. Clin Transl Imaging 8, 433-448 (2020). https://doi.org/10.1007/s40336-020-00389-7 - Fraioli F, Shankar A, Hargrave D, Hyare H, Gaze MN, Groves AM, Alongi P, Stoneham S, Michopoulou S, Syed R, Bomanji JB. 18F-fluoroethylcholine (18F-Cho) PET/MRI functional parameters in pediatric astrocytic brain tumors. Clin Nucl Med. 2015 Jan;40(1):e40-5. doi: 10.1097/RLU.0000000000000556. PMID: 25188640. - Alongi P, Vetrano IG, Fiasconaro E, Alaimo V, Laudicella R, Bellavia M, Rubino F, Bagnato S, Galardi G. Choline-PET/CT in the Differential Diagnosis Between Cystic Glioblastoma and Intraparenchymal Hemorrhage. Curr Radiopharm. 2019;12(1):88-92. doi: 10.2174/1874471011666180817122427. PMID: 30117406. Journal requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.
The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. We note that Figure 8 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright. We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission: (1) You may seek permission from the original copyright holder of Figure(s) [#] to publish the content specifically under the CC BY 4.0 license. We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text: "I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form." If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder's requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only. 3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Partly**********2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: No**********3.
Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified. Reviewer #1: Yes Reviewer #2: Yes**********4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes**********5. Review Comments to the Author Reviewer #1: In this study, the authors investigated the role of FDG and F-Choline PET compared to MRI for the early detection of treatment response in a glioblastoma rat model obtained with F98 cells. Rats were divided into two groups (control, and treated with radiation and temozolomide) and the response was monitored with MRI and with FDG and F-Choline PET performed at different time points. The text is well organized and the methods and results are fully described, but there are some points that need to be clarified. Major comments Abstract 1. Line 43, in the abstract the authors indicate that F-Choline PET was performed on day 7 post-treatment, but in M&M and in the text day 8 is indicated (line 218). Materials and methods 1. Line 231, can the authors edit in the correct injected dose (mean ± SE) of FDG and F-Choline? In line 231 the authors indicate 37 MBq, while in the figure legends other specific doses are indicated. 2. Line 274, can the dose of FCho be edited in MBq, please? 3. Lines 289-292, why are these lines under the paragraph "Autoradiography and Evans Blue (EB) staining"? Can the authors add another title, please? 4. Figure 3, is this the same animal? The images look different. If not, can the authors use the same animal, please? Results 5. In Table 2 there are only the p-values; can the authors also add the value of each parameter (mean ± SE), please? 6. Figure 4, why are the FDG MTV values at d2 so different between the control and treated group whereas the volumes measured using MRI are closer? The tumor volume of control animals measured using MRI increased over time whereas the MTV (both FDG and Choline) remained stable or slightly decreased; what is the hypothesis? The authors should discuss this. 7. On day 9, only 2 control animals underwent FDG PET; how is it possible that both the MTV and (MTV x SUVmean) values are significantly different between the control and treated group (lines 347 and 353)? 8. In Figure 5 only a control rat is represented; can the authors add a longitudinal figure with a representative treated rat so that images of control and treated rats can be compared, please? 9. Figure 5, what is the color scale for PET? Can the authors also add min and max values on the scale? 10.
Did the authors evaluate post-mortem staining for Ki67, GFAP, or choline kinase? Discussion: Please edit the discussion on the basis of the results (point 6). Reviewer #2: The authors evaluate the role of FDG-PET and Cho-PET, compared to c.e. MRI, for the early detection of treatment response in a murine model of GBM; 5 animals were randomized to receive RT plus TMZ, while the other 5 were not. The treatment effect was evaluated with serial MRI and FDG-PET, and also Cho-PET. The metabolic tumor volume (MTV) was semi-automatically calculated and the average tracer uptake within the MTV was converted to a SUV. Using SUVmean x MTV, FDG-PET started to detect treatment effects at day 5 post-treatment, comparable to c.e. MRI. Moreover, delayed FDG-PET (240 min p.i.) detected such effects earlier (from day 2); on the other hand, no significant differences were found at any time point for both the MTV and (SUVmean x MTV) of Cho-PET. Therefore, the authors concluded that MRI and delayed FDG-PET detect early treatment responses in this murine model of GBM, whereas these results were not obtained with Cho-PET. The topic is undoubtedly intriguing, but I have some issues: - INTRODUCTION: The ref 1 is related to the 2007 WHO classification; from an epidemiological point of view, it would be better to consider the last CBTRUS report. I suggest also modifying refs. 2 and 3, using a more up-to-date literature reference about glioma management (guideline on the diagnosis and treatment of adult astrocytic and oligodendroglial gliomas. Lancet Oncol. 2017 Jun;18(6):e315-e329). Moreover, the study by Stupp in 2005 that showed the role of combined RT-CMT was not ref n° 5 but the one published in NEJM (2005;352:987-96. doi: 10.1056/NEJMoa043330). It would be better to update the references related to the clinical role of Cho-PET in brain tumors, due to the increasing interest in this technique. Why did the authors select Cho-PET instead of, for example, [18F]FAZA PET? I think that clarifying the advantages and disadvantages of this choice could increase the informative role of the present work. How was the sample size selected? Was a statistical analysis performed to select a population of 10 animals? Finally, the authors disclose financial support by the Lux Luka Foundation, but they must clearly state, according to Journal guidelines, who exactly received funding, and the role of the sponsor in the study design and analysis.**********6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. 10 Feb 2021 A. Additional Editor Comments: The article describes the results of a pre-clinical study in a rat model of glioblastoma using [18F]FDG and [18F]FCho PET compared to contrast-enhanced MRI for the early detection of treatment response. The objective of the study is interesting and may help future potential applications regarding PET imaging in the field of primary brain tumors. I agree with the comments of the two reviewers; the paper needs a major revision.________________________________________________________________________Comment 1: The results of choline PET and FDG PET have to be discussed carefully because both radiopharmaceutical agents have limited use in this field.
For choline PET, although the biodistribution of the tracer is fast compared to FDG, other studies suggest starting acquisition rapidly after injection and a time between 20 and 50 minutes for late imaging in order to have a good balance of the T/B ratio. A single Cho-PET dynamic acquisition 5-20 minutes after the injection may negatively affect the quality of the images. Please discuss this in the discussion and, where appropriate, among the limitations of the study. The main reason for starting the acquisition early (5-10 min) after injection of the tracer is indeed the rapid and extensive clearance from the blood after intravenous injection. DeGrado et al. documented that the biodistribution of [18F]FCho changes very slowly after 10 min post-injection (DeGrado TR 2001 and 2002). In 2001, in prostate cancer patients, it was documented that the activity in the prostate reached a maximum within a 5-min window following the injection (DeGrado TR 2001). This was also confirmed by our group investigating the blood kinetics of [18F]FCho in rats, as shown in Figure 8, confirming a fast metabolization with an availability of only 17.5% of intact tracer after 15 min. In multiple clinical reports, emission scanning of the brain was performed for 15 min, beginning 5-10 min after injection of the tracer. To confirm this, in a previous publication our group investigated the optimal timing for imaging brain tumours and other brain lesions with [18F]FCho PET. On the basis of the TACs, PET imaging with [18F]FCho starting within minutes after the administration of the tracer is preferred; uptake by all types of brain lesions was rapid, and minimal changes in uptake activity occurred more than 6 min after the administration of the tracer, except for meningiomas, which showed decreasing activity after an early peak. Hence, if discrimination between meningioma and other brain tumours is of concern, both 'early' and 'late' PET imaging could be helpful. Recently, Grkovski M et al. performed a dynamic 40-min [18F]FCho PET in patients with brain metastasis. The percentage of activity due to [18F]FCho in plasma was more stable than in rats: 67±11%, 65±9%, 65±7% and 64±7% at 1, 5, 10 and 30 min after injection. However, intratumoral [18F]FCho uptake reached 80% and 90% of the total activity at 3±4 and 7±6 minutes (median 1 and 6 minutes) post-injection, respectively (Grkovski M 2020). This confirms other studies showing that radiolabeled choline uptake is rapid and sustained and appears to reach a plateau faster than [18F]FET (Lohmann P 2015). In contrast with [18F]FDG brain imaging, where dual-time-point imaging has shown clear advantages in increasing the T/B ratio, this is assumed to be less advantageous for [18F]FCho PET since the uptake in normal brain is already low. For the use of [18F]FCho PET for the detection of bone metastases in prostate cancer patients, delayed imaging is recommended; a significant increase in [18F]FCho accumulation in bone metastases was documented using dual-time-point PET imaging. Based on the above, we selected a dynamic scan of 20 min for imaging the F98 GB tumor in rats, assuming, based on the literature, that the maximal tumor uptake has been reached by then. Parts of this clarification were added to the discussion: lines 567-583. ● Mertens K, Bolcaen J, Ham H, Deblaere K, Van den Broecke C, Boterberg T, De Vos F, Goethals I.
The optimal timing for imaging brain tumours and other brain lesions with 18F-labelled fluoromethylcholine: a dynamic positron emission tomography study. Nucl Med Commun. 2012;33:954-9.\u25cf Mertens K, Ham H, Deblaere K, Kalala JP, Van den Broecke C, Slaets D, et al. Distribution patterns of 18F-labelled fluoromethylcholine in normal structures and tumors of the head: a PET/MRI evaluation. Clin Nucl Med. 2012;37:e196-203.\u25cf Mertens K, Acou M, Van Hauwe J, De Ruyck I, Van den Broecke C, Kalala JP, et al. Validation of 18F-FDG PET at conventional and delayed intervals for the discrimination of high-grade from low-grade gliomas: a stereotactic PET and MRI study. Clin Nucl Med. 2013;38:495-500. \u25cf Spence AM, Muzi M, Mankoff DA, O\u2019Sullivan SF, Link JM, Lewellen TK, et al. 18F-FDG PET of gliomas at delayed intervals: improved distinction between tumor and normal gray matter. J Nucl Med. 2004;45:1653-9.\u25cf DeGrado TR, Coleman RE, Wang S, Baldwin SW, Orr MD, Robertson CN, et al. Synthesis and evaluation of 18F-labeled choline as an oncologic tracer for positron emission tomography: initial findings in prostate cancer. Cancer Res 2001;61:110-7.\u25cf DeGrado TR, Reiman RE, Price DT, Wang S, Coleman RE. Pharmacokinetics and radiation dosimetry of 18F-fluorocholine. J Nucl Med. 2002;43:92-6.\u25cf Kwee SA, Wei H, Sesterhenn I, Yun D, Coel MN. Localization of primary prostate cancer with dual-phase 18F-fluorocholine PET. J Nucl Med 2006;47:262-9.\u25cf Kwee SA, Coel MN, Lim J, Ko JP. Combined use of F-18 fluorocholine positron emission tomography and magnetic resonance spectroscopy for brain tumor evaluation. J Neuroimaging. 2004;14:285-9.\u25cf Kwee SA, Ko JP, Jiang CS, Watters MR, Coel MN. Solitary brain lesions enhancing at MR imaging: evaluation with fluorine 18fluorocholine PET. Radiology. 2007;244:557-65.\u25cf Husarik DB, Miralbell R, Dubs M, John H, Giger OT, Gelet A, et al. Evaluation of [(18)F]-choline PET/CT for staging and restaging of prostate cancer. Eur J Nucl Med Mol Imaging. 2008;35:253-63.\u25cf Grkovski M, Kohutek ZA, Sch\u00f6der H, Brennan CW, Tabar VS, Gutin PH, et al. 18F-Fluorocholine PET uptake correlates with pathologic evidence of recurrent tumor after stereotactic radiosurgery for brain metastases. Eur J Nucl Med Mol Imaging. 2020;47:1446-57.\u25cf Schaefferkoetter JD, Wang Z, Stephenson MC, Roy S, Conti M, Eriksson L, et al. Quantitative 18F-fluorocholine positron emission tomography for prostate cancer: correlation between kinetic parameters and Gleason scoring. EJNMMI Res. 2017;7:25.\u25cf Grkovski M, Gharzeddine K, Sawan P, Sch\u00f6der H, Michaud L, Weber WA, et al. 11C-Choline Pharmacokinetics in Recurrent Prostate Cancer. J Nucl Med. 2018;59:1672-8.\u25cf Sutinen E, Nurmi M, Roivainen A, Varpula M, Tolvanen T, Lehikoinen P, et al. Kinetics of [(11)C]choline uptake in prostate cancer: a PET study. Eur J Nucl Med Mol Imaging. 2004;31:317-24.\u25cf Lohmann P, Herzog H, Rota Kops E, Stoffels G, Judov N, Filss C, et al. Dual-time-point O-(2-[(18)F]fluoroethyl)-L-tyrosine PET for grading of cerebral gliomas. Eur Radiol. 2015;25:3017-24. \u25cf Bolcaen J, Lybaert K, Moerman L, Descamps B, Deblaere K, Boterberg T, et al. Kinetic Modeling and Graphical Analysis of 18F-Fluoromethylcholine (FCho), 18F-Fluoroethyltyrosine (FET) and 18F-fluorodeoxyglucose (FDG) PET for the Discrimination between High-grade Glioma and Radiation Necrosis in Rats. PLoS One. 
2016;11:e0161845.________________________________________________________________________Comment 2: I suggest also reporting some missing references to recent representative articles on humans, e.g.: - Vetrano, I.G., Laudicella, R. & Alongi, P. Choline PET/CT and intraoperative management of primary brain tumors. New insights for contemporary neurosurgery. Clin Transl Imaging 8, 401-404 (2020). https://doi.org/10.1007/s40336-020-00398-6 - Alongi, P., Quartuccio, N., Arnone, A. et al. Brain PET/CT using prostate cancer radiopharmaceutical agents in the evaluation of gliomas. Clin Transl Imaging 8, 433-448 (2020). https://doi.org/10.1007/s40336-020-00389-7 - Fraioli F, Shankar A, Hargrave D, Hyare H, Gaze MN, Groves AM, Alongi P, Stoneham S, Michopoulou S, Syed R, Bomanji JB. 18F-fluoroethylcholine (18F-Cho) PET/MRI functional parameters in pediatric astrocytic brain tumors. Clin Nucl Med. 2015 Jan;40(1):e40-5. doi: 10.1097/RLU.0000000000000556. PMID: 25188640. - Alongi P, Vetrano IG, Fiasconaro E, Alaimo V, Laudicella R, Bellavia M, Rubino F, Bagnato S, Galardi G. Choline-PET/CT in the Differential Diagnosis Between Cystic Glioblastoma and Intraparenchymal Hemorrhage. Curr Radiopharm. 2019;12(1):88-92. doi: 10.2174/1874471011666180817122427. PMID: 30117406. We agree that these recent references are important and have included them in the manuscript. This section was added to the introduction (lines 150-157): The metabolic information acquired by [18F]FCho PET has been shown to be able to distinguish high-grade glioma, brain metastases and benign lesions, and to identify the most malignant areas for stereotactic sampling. Grech-Sollars et al. concluded that [18F]FCho PET was able to differentiate WHO grade IV from grade II and III tumours, whereas MR spectroscopy differentiated grade III/IV from grade II tumours [Grech-Sollars et al. 2019]. Recently, the potential use of [18F]FCho PET/CT in the intraoperative management or radio-surgical approaches for glioma has been suggested, including intraoperative guidance in conjunction with MR spectroscopy. The reference to Fraioli et al. was added in the discussion (lines 518 and 523-525): Finally, a [18F]FCho PET study in childhood astrocytic tumors confirmed the added value of [18F]FCho SUVmax and functional MRI apparent diffusion coefficient values to monitor therapy response [Fraioli et al. 2015]. The following recent references were also included in the revised manuscript: - Villena Martín M, Pena Pardo FJ, Jiménez Aragón F, Borras Moreno JM, García Vicente AM, et al. Metabolic targeting can improve the efficiency of brain tumor biopsies. Semin Oncol. 2020;47:148-54. - Grech-Sollars M, Ordidge KL, Vaqas B, Davies C, Vaja V, Honeyfield L, et al. Imaging and Tissue Biomarkers of Choline Metabolism in Diffuse Adult Glioma: 18F-Fluoromethylcholine PET/CT, Magnetic Resonance Spectroscopy, and Choline Kinase α. Cancers. 2019;11:1969.________________________________________________________________________B. When submitting your revision, we need you to address these additional requirements. Comment 3: Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.
The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf Upon submission of the revised manuscript, extra care was taken to meet the PLOS ONE style requirements, including those for file naming. Referring to supplemental materials was adapted and author titles were deleted. The reference style of the added references was adapted in the final manuscript. Figure sizes were adapted to meet the criteria.________________________________________________________________________Comment 4: We note that Figure 8 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright. We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission: (1) You may seek permission from the original copyright holder of Figure(s) [#] to publish the content specifically under the CC BY 4.0 license. We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text: "I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form." If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder's requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only. Figure 8 is published in our previous PLoS One publication: 2016;11(8):e0161845, Figs S2 and S4. This content is published under the CC BY 4.0 license. To the best of our knowledge, an additional permission request is not required. Please let us know if we have understood this incorrectly. We changed the caption to clarify the reuse of the figure (lines 589-590). The original figures were uploaded as 'other' in the online submission.________________________________________________________________________Comment 5: Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly.
Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. The guidelines were applied for the supporting information, including an in-text citation: S1 Fig. The name of the supporting information figure was matched with the supporting information captions within the manuscript (line 396). A caption was added at the end of the manuscript, including a title (lines 819-820).________________________________________________________________________C. REVIEWER 1 In this study, the authors investigated the role of FDG and F-Choline PET compared to MRI for the early detection of treatment response in a glioblastoma rat model obtained with F98 cells. Rats were divided into two groups (control, and treated with radiation and Temozolomide) and the response was monitored with MRI and with FDG and F-Choline PET performed at different time points. The text is well organized and the methods and results are fully described, but there are some points that need to be clarified. Major comments Abstract Comment 1. Line 43, in the abstract the authors indicate that F-Choline was performed on day 7 post-treatment, but in M&M and in the text day 8 is indicated (line 218). This was indeed an error in the abstract and has been corrected (line 52). [18F]FCho PET was performed on days 1, 6, 8 and 13, as mentioned in M&M (line 247) and in Fig 2, Table 1 and Table 2.________________________________________________________________________Comment 2. Line 231, can the authors edit in the correct injected dose (mean ± SE) of FDG and F-Choline? In line 231 the authors indicate 37 MBq, while in the figure legends other specific doses are indicated. We agree to include the mean injected activity of all [18F]FDG and [18F]FCho scans in the M&M. The mean injected activity for all [18F]FDG scans was 37.89 ± 0.35 MBq and for all [18F]FCho scans it was 39.55 ± 0.37 MBq (mean ± SE). This was added to the manuscript (lines 263-264).________________________________________________________________________Comment 3. Line 274, can the dose of FCho be edited in MBq, please? The activity (0.55 mCi) was changed to MBq at line 313.________________________________________________________________________Comment 4. Lines 289-292, why are these lines under the paragraph "Autoradiography and Evans Blue (EB) staining"? Can the authors add another title, please? We agree that these lines do not fit under that paragraph. These lines were moved to lines 302-305.________________________________________________________________________Comment 5. Figure 3, is this the same animal? The images look different. If not, can the authors use the same animal, please? The axial PET images used in Figure 3 were indeed of 3 different rats. We made a new figure including coronal images of an [18F]FDG (early and late) and a [18F]FCho PET of the same rat with a F98 GB tumor. Different thresholds (≥ 40, 50, 60 and 70%) are contoured. The figure legend was adapted (lines 294-300).________________________________________________________________________Results Comment 6. In Table 2 there are only the p-values; can the authors also add the value of each parameter (mean ± SE), please? The mean ± SE was added to Table 2 (pages 13-14).________________________________________________________________________Comment 7. Figure 4, why are the FDG MTV values at d2 so different between the control and treated group whereas the volume measured using MRI is closer?
The tumor volume of control animals measured using MRI increased over time, whereas the MTV (both FDG and Choline) remained stable or decreased slightly; what is the hypothesis? The authors should discuss this.
The following paragraph has been added to the Discussion (lines 503-510):
In Figure 4 it can be observed that tumor volumes measured using contrast-enhanced MRI clearly increase over time in the control group, while metabolic tumor volumes remain more or less stable and also show larger variability. Our hypothesis is that the fast-growing tumors, as observed on MRI, are becoming metabolically more heterogeneous. This results in a more heterogeneous tracer uptake within these tumours (e.g. necrotic core vs. viable tumour proliferation and infiltration), which gives rise to higher variations of the measured MTV values in the control group and might also explain why the MTV values are not increasing over time.
In addition (not added to the manuscript): the FDG MTV at day 2 is indeed very different between the control and therapy groups, whereas the difference in the MRI Gd tumor volume is smaller. On closer inspection of the data, this can be explained by the fact that 2 rats of the DMSO group showed a 12x increase in FDG MTV between pre-therapy and d2, whereas there was only a 2- to 3-fold increase of the MR Gd volume. The 3 other rats of the DMSO group showed a 2- to 3-fold increase of both the MR Gd volume and the FDG MTV between pre-therapy and d2. As a result, the SE values are higher.
________________________________________________________________________
Comment 8. On day 9, only 2 control animals underwent FDG PET; how is it possible that both the MTV and SUVmean x MTV values are significantly different between the control and treated groups (lines 347 and 353)?
Thank you for this very valuable comment. After re-checking the data and statistical analysis, we observed a mistake in Table 1. On day 9, FDG PET was performed on 4 control animals, not on 2 control animals. We changed this in the table on page 10, and the total number of scans included was also changed: line 391.
________________________________________________________________________
Comment 9. In Figure 5 only a control rat is represented; can the authors add a longitudinal figure with a representative treated rat, so that images of control and treated rats can be compared, please?
Longitudinal FDG and FCho PET/MRI images of treated rats were added to Figure 5. The figure legend was adapted. The mean ± SE injected activity of all FDG and FCho PET scans included in the new figure was added (see lines 424-430).
________________________________________________________________________
Comment 10. Figure 5, what is the color scale for PET? Can the authors also add min and max values on the scale?
The images in Figure 5 were created using the PMOD software, and the color scale of the images is in kBq/cc (kBq/mL) before extracting data. Only after VOI (MTV) delineation were the uptake values in kBq/cc extracted in Excel to calculate SUV and TBR values. The contrast ranges of the images selected in Figure 5 were 0-850 kBq/cc for early FDG, 0-220 kBq/cc for delayed FDG and 0-350 kBq/cc for FCho. These were selected manually to obtain an optimal image of the brain and GB tumor uptake. Hence we prefer not to add the different ranges to the color scale.
________________________________________________________________________
10. Did the authors evaluate post-mortem staining for Ki67, GFAP, choline kinase?
These stainings were not performed in this study.
We did make a lot of effort to optimize IHC for staining the choline transporter (CTL1) to correlate with the [18F]FCho PET uptake. However, after multiple unsuccessful attempts, the antibody we purchased seemed to bind only human brain tissue and not rat brain tissue.
________________________________________________________________________
Discussion
Edit the discussion on the basis of the results (point 6), please.
The following paragraph has been added to the Discussion (lines 503-510).
D. REVIEWER 2
Reviewer #2: The authors evaluate the role of FDG-PET and Cho-PET, compared to c.e. MRI, for the early detection of treatment response in a murine model of GBM; 5 animals randomly received RT plus TMZ, while the other 5 did not. The treatment effect was evaluated with serial MRI and FDG-PET, and also Cho-PET. The metabolic tumor volume (MTV) was semi-automatically calculated, and the average tracer uptake within the MTV was converted to a SUV. Using SUVmean x MTV, FDG-PET started to detect treatment effects at day 5 post-treatment, comparable to c.e. MRI. Moreover, delayed FDG-PET (240 min p.i.) detected such effects earlier (from day 2); on the other hand, no significant differences were found at any time point for both the MTV and (SUVmean x MTV) of Cho-PET. Therefore, the authors concluded that MRI and delayed FDG-PET detect early treatment responses in this murine model of GBM, whereas these results were not obtained with Cho-PET.
The topic is undoubtedly intriguing, but I have some issues:
INTRODUCTION
1. Ref 1 is related to the 2007 WHO classification; from an epidemiological point of view, it would be better to consider the latest CBTRUS report.
Reference 1 was changed to the recent publication of Ostrom et al. We included numbers from this work in the introduction, see lines 72-74: 'In the US, 84,170 new cases of primary brain and other central nervous system tumors are estimated to be diagnosed in 2021. Glioblastoma (GB) has the highest number of cases of all malignant tumors, with 12,970 cases projected in 2021 [1].'
________________________________________________________________________
2. I also suggest modifying refs. 2 and 3, using a more up-to-date literature reference about glioma management (guideline on the diagnosis and treatment of adult astrocytic and oligodendroglial gliomas. Lancet Oncol. 2017 Jun;18(6):e315-e329).
We agree that this is an important and more up-to-date reference; hence we changed previous references 2 and 3 to Weller et al. 2017.
________________________________________________________________________
3. Moreover, the study by Stupp in 2005 that showed the role of combined RT-CMT was not ref n° 5 but the one published in NEJM (2005;352(10):987-96. doi: 10.1056/NEJMoa043330).
Reference 5 was changed to this reference of Stupp et al. from NEJM.
________________________________________________________________________
4. It would be better to update the references related to the clinical role of ChoPET in brain tumors, due to the increasing interest in this technique.
We added more recent references related to the clinical role of ChoPET in brain tumors to the manuscript. An extra section was added to the introduction, see lines 150-157. The reference to Fraioli et al. was added in the discussion, lines 518 and 523-525. We included the following references:
- Vetrano IG, Laudicella R, Alongi P. Choline PET/CT and intraoperative management of primary brain tumors. New insights for contemporary neurosurgery. Clin Transl Imaging. 2020;8:401-4.
- Alongi, P, Quartuccio, N, Arnone, A, Kokomani A, Allocca M, Nappi G, et al. Brain PET/CT using prostate cancer radiopharmaceutical agents in the evaluation of gliomas. Clin Transl Imaging. 2020;8:433-48. - Fraioli F, Shankar A, Hargrave D, Hyare H, Gaze MN, Groves AM, et al. 18F-fluoroethylcholine (18F-Cho) PET/MRI functional parameters in pediatric astrocytic brain tumors. Clin Nucl Med. 2015;40:e40-5.- Alongi P, Vetrano IG, Fiasconaro E, Alaimo V, Laudicella R, Bellavia M, et al. Choline-PET/CT in the Differential Diagnosis Between Cystic Glioblastoma and Intraparenchymal Hemorrhage. Curr Radiopharm. 2019;12:88-92. - Villena Mart\u00edn M, Pena Pardo FJ, Jim\u00e9nez Arag\u00f3n F, Borras Moreno JM, Garc\u00eda Vicente AM, et al. Metabolic targeting can improve the efficiency of brain tumor biopsies. Semin Oncol. 2020;47:148-54.- Grech-Sollars M, Ordidge KL, Vaqas B, Davies C, Vaja V, Honeyfield L, et al. Imaging and Tissue Biomarkers of Choline Metabolism in Diffuse Adult Glioma: 18F-Fluoromethylcholine PET/CT, Magnetic Resonance Spectroscopy, and Choline Kinase \u03b1. Cancers. 2019;11:1969.______________________________________________________________5. Why the authors selected Cho-PET, instead of [18F]FAZA PET, for example? I think that clarifying the advantages and disadvantages of this choice could increase the informative role of the present work.For many years, the focus of our research group has been the role of various F-18 labeled PET-tracers in neuro-oncology. In comparison with other PET tracers, FDG and non-FDG tracers alike, [18F]FCho has certain advantages of which a very low uptake in normal white and grey matter of the brain is of major interest because it enhances the contrast between tumour and normal brain tissue. Secondly, it is \u2013 at least in Europe - widely available and it is still the tracer of choice in the management of castration resistant prostate cancer for those centers that do not have access to PSMA PET.Although [18F]FCho PET has been studied for glioma imaging before by other groups, the number of studies is still limited compared to the number of publications on amino acid and hypoxia PET biomarkers.Since hypoxia is associated with tumor aggressiveness, radiation resistance and poor prognosis, it is possible that changes in [18F]FAZA or [18F]FMISO uptake between pre- and post-treatment can be used to monitor treatment response. However, only few studies have been performed. One downside is that the degree of hypoxia can theoretically fluctuate, influenced by therapy and the presence of acute versus chronic hypoxia which can influence the reproducibility of hypoxia PET . In 2013, [18F]FAZA and [18F]FDG uptake in the F98 GB rat model was investigated by Belloli et al., however, not specifically for therapy response assessment. It is also noteworthy that there is no current consensus on which tracer is the best for hypoxia imaging. [18F]FAZA has advantages compared to [18F]FMISO due to better pharmacokinetic properties but [18F]FMISO can cross the blood\u2013brain barrier because of its lipophilic nature while [18F]FAZA and [18F]DiFA can not . Another downside of [18F]FAZA is that imaging is optimal 2-3 hours post-injection (with [18F]FMISO up to 4h), making it less convenient to work with. However, we studied [18F]FET and [18F]FAZA PET in another in vivo study by our group with a focus on the feasibility of PET-guided irradiation . 
Because hypoxia is directly related to radiation resistance, [18F]FAZA PET could be used to guide an additional boost on hypoxic tumor regions as a strategy to overcome radioresistance and increase therapy effectiveness. [18F]FET has already been studied in depth for imaging glioma; however, its tumor-to-normal brain contrast is less optimal compared to [18F]FCho.
The underlined parts have been included in the introduction (lines 142-144).
- Verhoeven J, Bolcaen J, De Meulenaere V, Kersemans K, Descamps B, Donche S, et al. Technical feasibility of [18F]FET and [18F]FAZA PET guided radiotherapy in a F98 glioblastoma rat model. Radiat Oncol. 2019;14(1):89.
- Mapelli P, Zerbetto F, Incerti E, Conte GM, Bettinardi V, Fallanca F, et al. 18F-FAZA PET/CT hypoxia imaging of high-grade glioma before and after radiotherapy. Clin Nucl Med. 2017;42:e525-26.
- Belloli S, Brioschi A, Politi LS, Ronchetti F, Calderoni S, Raccagni I, et al. Characterization of biological features of a rat F98 GBM model: a PET-MRI study with [18F]FAZA and [18F]FDG. Nucl Med Biol. 2013;40:831-40.
- Hirata K, Yamaguchi S, Shiga T, Kuge Y, Tamaki N. The Roles of Hypoxia Imaging Using 18F-Fluoromisonidazole Positron Emission Tomography in Glioma Treatment. J Clin Med. 2019;8:1088.
- Mönnich D, Troost EG, Kaanders JH, Oyen WJ, Alber M, Thorwarth D. Modelling and simulation of the influence of acute and chronic hypoxia on [18F]fluoromisonidazole PET imaging. Phys Med Biol. 2012;57:1675-84.
________________________________________________________________________
6. How was the sample size selected? Was a statistical analysis performed to select a population of 10 animals?
Using a statistical power analysis based on ANOVA with 2 groups, 5 repeated measurements, an alpha value of 0.05, a power of 0.80 and an effect size of 0.4, a total sample size of 10 animals was calculated.
________________________________________________________________________
7. Finally, the authors disclose financial support by the Lux Luka Foundation, but they must clearly state, according to the Journal guidelines, who exactly received funding, and the role of the sponsor in the study design and analysis.
The Lux Luka Foundation did indeed support this study financially. Funds were received by Prof. I. Goethals and Prof. T. Boterberg. The sponsor did not play a role in the study design and analysis. The funding source was not included in the Acknowledgements section of the manuscript, according to the Journal's guidelines. However, this information was added in the cover letter.
Attachment: Submitted filename: Response to Reviewers.docx
22 Feb 2021
Assessment of the effect of therapy in a rat model of glioblastoma using [18F]FDG and [18F]FCho PET compared to contrast-enhanced MRI.
PONE-D-20-36019R1
Dear Dr. Bolcaen,
We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.
Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.
To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date.
If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. An invoice for payment will follow shortly after the formal acceptance.
If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.
Kind regards,
Pierpaolo Alongi
Academic Editor
PLOS ONE
Additional Editor Comments:
Reviewers' comments:
23 Feb 2021
PONE-D-20-36019R1
Assessment of the effect of therapy in a rat model of glioblastoma using [18F]FDG and [18F]FCho PET compared to contrast-enhanced MRI.
Dear Dr. Bolcaen:
I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.
If we can help with anything else, please email us at plosone@plos.org.
Thank you for submitting your work to PLOS ONE and supporting open access.
Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Pierpaolo Alongi
Academic Editor
PLOS ONE"}
{"text": "There are errors in the Funding Statement. The publisher apologizes for these errors. The correct Funding statement is: This work was supported by Fondazione Celiachia Onlus, Italy, Grant n° 046_FC_2013."}
{"text": "A step-by-step protocol of how to implement Bayesian multilevel model analysis with social data and how to interpret the results is presented. The article uses a dataset regarding religious teachings and the behaviors of lying and violence as an example. The analysis is performed using R statistical software and the bayesvl R package, which offers network-structured model construction and visualization power to diagnose and estimate results.
Specifications tableIn social sciences, the persistence of 'stargazing', p-hacking, and HARKing issues has currently led to a severe reproducibility crisis in which 70% of researchers have failed to reproduce the experiments of other scientists The analysis was done using the bayesvl R package (version 0.8.5) in the R statistical software (version 3.6.2) R> data(Legends345)R> data1 <- Legends345R> head(data1)Hereafter, we use one of our latest research studies as an example for performing Bayesian multilevel analysis with social data \u2022\"Lie\": whether the main character lies\u2022\"Viol\": whether the main character employs violence\u2022\"VB\": whether the main characters' behaviors express the value of Buddhism\u2022\"VC\": whether the main characters' behaviors reflect the value of Confucianism\u2022\"VT\": whether the main characters' behaviors express the value of Taoism\u2022\"Int1\": whether there are interventions from the supernatural world\u2022\"Int2\": whether there are interventions from the human world\u2022\"Out\": whether the outcome of a story is favorable for its main charactersEven though there are 25 binary variables, of which only eight variables are employed in this article:First, we establish three different directed acyclic graphs (DAGs), or so-called \"relationship trees,\" from simple to more complex ones, based on the dataset mentioned above.The first and the most straightforward \"relationship tree\" exemplified examines the determinants of the behaviors of lying and violence on the outcome of the main character see .Fig. 1ThR> library(bayesvl)R> model1 <- bayesvlR> model1 <- bvl_addNodeR> model1 <- bvl_addNodeR> model1 <- bvl_addNodeTo construct the \"relationship tree\" in Because the statistical distribution of all employed variables is binomial, we set \"binom\" in the function. Besides binomial distribution, the package also provides various types of statistical distribution for the types of data, namely: normal distribution \u2013 \"norm\", categorical distribution \u2013 \"cat\", Bernoulli distribution \u2013 \"bern\", Student's t-distribution \u2013 \"student\", Poisson distribution \u2013 \"pois\", and so on.R> model1 <- bvl_addArcR> model1 <- bvl_addArcAfter loading all the variables into the \"relationship tree\", the next step is to grant the regression type to the connection between the independent variables \"Lie\" and \"Viol\" and the dependent variable \"O\" by employing the function bvl_addArc. The model can be set as the fixed effect type by adding a \"slope\" into the command:R> bvl_stanPriors(model1)a_O ~ normalb_Viol_O ~ normalb_Lie_O ~ normalIn Bayesian inference, the posterior probability is estimated from a prior probability and a \"likelihood function\" derived from a statistical model for the observed data. Therefore, setting prior distribution is critical before fitting the model. The prior distribution can be determined based on previous empirical findings, researcher's past experience and personal intuition, or expert opinion R> bvl_bnPlot(model1)Since the prior distribution was not set in bvl_addArc, the package automatically set prior distribution of b_Viol_O as default distribution which is normal. 
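The argument lists of the bvl_addNode and bvl_addArc calls above were stripped during extraction. As a minimal sketch, the model-1 construction can be written as follows; the node names and "binom" distributions come from the variable list above, while the exact call signatures are assumed from the bayesvl package documentation:
R> library(bayesvl)
R> model1 <- bayesvl()  # empty "relationship tree"
R> model1 <- bvl_addNode(model1, "O", "binom")     # outcome of the story
R> model1 <- bvl_addNode(model1, "Lie", "binom")   # main character lies
R> model1 <- bvl_addNode(model1, "Viol", "binom")  # main character uses violence
R> model1 <- bvl_addArc(model1, "Lie", "O", "slope")   # fixed-effect path
R> model1 <- bvl_addArc(model1, "Viol", "O", "slope")  # fixed-effect path
Running bvl_stanPriors(model1) should then list the default normal priors shown above.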
Eventually, the function bvl_bnPlot can help produce the graphical network of the constructed model see .R> bvl_bR> summary(model1)Model Info:nodes: 3arcs: 2scores: NAO\u00a0+\u00a0b_Lie_O * Lie\u00a0+\u00a0b_Viol_O * Violformula: O ~ a_Estimates:model is not estimated.To check the structure and mathematical form of the model, one can use the function summary:R> model2 <- bayesvlR> model2 <- bvl_addNodeR> model2 <- bvl_addNodeR> model2 <- bvl_addNodeR> model2 <- bvl_addNodeR> model2 <- bvl_addNodeR> model2 <- bvl_addNodeR> model2 <- bvl_addNodeR> model2 <- bvl_addNodeThe second \"relationship tree\" is designed to estimate the impact of violent behavior and its interaction effect with religious values on the outcome of the main character see . SimilarThe variables \"B_and_Viol\", \"C_and_Viol\", and \"T_and_Viol\" are the interaction variables between the act of violence and the value of Buddhism, Confucianism, and Taoism, respectively. The independent interaction variables, represented by the green nodes, can be subsequently created from two normal independent variables, represented by the blue nodes. Unlike the normal variable \"Viol\" defined as \"binom\", or binomial, the interaction variables are defined as \"trans\", or interaction/transformed. It is noteworthy that the \"trans\" variable does not have a particular distribution but depends on the interaction of two normal variables through applying \" * \" or \"\u00a0+\u00a0\" operator. To standardize, we call normal independent variables as observation data and interaction variables as transformed data from now on.R> model2 <- bvl_addArcR> model2 <- bvl_addArcThe dash-line arrow demonstrates the relation between the transformed data and the observation data see . The valR> model2 <- bvl_addArcR> model2 <- bvl_addArcR> model2 <- bvl_addArcR> model2 <- bvl_addArcThe model can be set as the fixed effect type by adding \"slope\" into the command:a_O ~ normalb_Viol_O ~ normalb_B_and_Viol_O ~ normalb_C_and_Viol_O ~ normalb_T_and_Viol_O ~ normalThe prior distributions of model 2 are also set as default:R> bvl_bnPlot(model2)R> summary(model2)Model Info:nodes: 8arcs: 9scores: NAO\u00a0+\u00a0b_B_and_Viol_O * VB*Viol\u00a0+\u00a0b_C_and_Viol_O * VC*Viol\u00a0+\u00a0b_T_and_Viol_O * Viol*VTformula: O ~ a_Estimates:model is not estimated.Eventually, the function bvl_bnPlot and summary can help produce the graphical network see and the One can create a much more complex model of multilevel regression analysis, while only following a similar procedure with two models mentioned above and employing some additional functions. The primary purpose of the third exemplary \"relationship tree\" is to explore the impacts of lying and violence behaviors, their interaction with religious values, and intervention from the supernatural or human world on the outcome of the main character see .Fig. 5ThInt1_or_Int2\u00a0=\u00a0(Int1\u00a0+\u00a0Int2 > 0 ? 1: 0)To construct the \"relationship tree\" illustrated in R> model3 <- bvl_addNode\", out_type\u00a0=\u00a0\"int\", lower\u00a0=\u00a00, test\u00a0=\u00a0c) fun\u00a0=\u00a0\"({0}> 0 ? 1: 0)\" is equivalent to the conditional algorithm shown above, while out_type stands for the property of the output, such as \"int\" (integer) and \"real\" . The parameter test\u00a0=\u00a0c helps to insert the code computing \u201cfixed predicted outcome\u201d when Int1_or_Int2\u00a0=\u00a00 and Int1_or_Int2\u00a0=\u00a01. 
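For reference, a hedged reconstruction of the garbled bvl_addNode call above; the fun, out_type, lower and test arguments are named in the surrounding text, while test = c(0, 1) is an assumption matching the two predicted-outcome cases mentioned:
R> model3 <- bvl_addNode(model3, "Int1_or_Int2", "trans",
+      fun = "({0} > 0 ? 1 : 0)",  # evaluates to 1 if Int1 + Int2 > 0, else 0
+      out_type = "int", lower = 0,
+      test = c(0, 1))             # fixed predicted outcome at 0 and at 1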
The value of transformed data \"Int1_or_Int2\" is defined based on the values of observational data \"Int1\" and \"Int2\" through the mathematical operator \"\u00a0+\u00a0\":R> model3 <- bvl_addArcR> model3 <- bvl_addArcTherefore, the command to create the node of \"Int1_or_Int2\" is augmented as follows:R> model3 <- bvl_addArcR> model3 <- bvl_addArcR> model3 <- bvl_addArc\", \"sigma_ ~ normal\"))+ priors\u00a0=\u00a0For completing the \"relationship tree\" construction, the last step is to connect two observational data \"Lie\" and \"Viol\" as well as other transformed data to the outcome \"O\". Like previous commands, the function bvl_addArc is used, but \"trans\" is replaced by \"slope\" (fixed effect) or \"varint\" (varying intercept), to convert the relationships between \"O\" and other nodes into regression relationships. There are four fundamental types of statistical model integrated in the bayesvl package: fixed-effect model (\"slope\"), varying-intercept model (\"varint\"), varying-slope model (\"varslope\"), and mixed-effect model (\"varpars\").c\", \"sigma_ ~ normal\") into the function bvl_addArc. Similarly, this method can be applied to change the prior distribution of other relationships by using the prefix a0_, b_, or sigma_, depending on the relationship type. Besides normal distribution, other kinds of distribution can also be implemented for setting up prior distribution by replacing \"normal\" by the name of the designated distribution . The prior distribution of each path can be checked by typing:R> bvl_stanPriors(model3)b_B_and_Viol_O ~ normalb_C_and_Viol_O ~ normalb_T_and_Viol_O ~ normalb_Viol_O ~ normalb_B_and_Lie_O ~ normalb_C_and_Lie_O ~ normalb_T_and_Lie_O ~ normalb_Lie_O ~ normala0_Int1_or_Int2 ~ normalsigma_Int1_or_Int2 ~ normalu_Int1_or_Int2 ~ normalThe first and second commands are to create the regression relationships of the outcome with observational and transformed data, respectively, employing a fixed-effect model, while the third command is to create the regression relationship between the outcome and transformed data employing a varying-intercept model. In model 3, the prior distribution of all the paths from observational and transformed nodes to the outcome node is set as default, except for the path from \"Int1_or_Int2\" to \"O\". The prior distributions of the relationship between \"Int1_or_Int2\" and \"O\" is set by adding the code priors\u00a0=\u00a0R> bvl_bnPlot(model3)Eventually, the function bvl_bnPlot can help produce the graphical network of the constructed model see .R> bvl_bR> bvl_formulaB_and_Lie ~ VB*LieR> bvl_formulaInt1_or_Int2 ~ (Int1+Int2 > 0 ? 1: 0)One can also check the mathematical construct of each transformed data in the \"relationship tree\" above by using the function bvl_formula, like the following examples:R> summary(model3)Model Info:nodes: 15arcs: 23scores: NAb_C_and_Viol_O * VC*Viol\u00a0+\u00a0b_T_and_Viol_O * VT*Viol\u00a0+\u00a0b_Viol_O * Viol\u00a0+\u00a0b_B_and_Lie_O * VB*Lie\u00a0+\u00a0b_C_and_Lie_O * VC*Lie\u00a0+\u00a0b_T_and_Lie_O * VT*Lie\u00a0+\u00a0b_Lie_O * Lie\u00a0+\u00a0a_Int1_or_Int2[(Int1+Int2 > 0 ? 1: 0)]formula: O ~ b_B_and_Viol_O * VB*Viol\u00a0+\u00a0To check the structure and mathematical form of the model, one can use the function summary:Estimates: model is not estimated!R> model_string <- bvl_model2Stan(model3)R> cat(model_string)Before fitting the model using MCMC simulation, one needs to generate the Stan code in R. 
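Before generating the Stan code, it may help to see the prior-setting call whose arguments were garbled above written out in full. A sketch under the assumption that the hyperparameters are normal(0, 5); the a0_ and sigma_ prefixes and the varying-intercept arc from "Int1_or_Int2" to "O" are given in the text, while the concrete values are illustrative only:
R> model3 <- bvl_addArc(model3, "Int1_or_Int2", "O", "varint",
+      priors = c("a0_ ~ normal(0, 5)", "sigma_ ~ normal(0, 5)"))
R> bvl_stanPriors(model3)  # confirm the prior attached to each path
With the arcs and priors in place, the Stan code can be generated and inspected.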
Because the bayesvl package provides an automatic generation of Stan code, one can use the following commands:R> model3 <- bvl_modelFitR> summary(model3)Model Info:nodes: 15arcs: 23scores: NAb_C_and_Viol_O * VC*Viol\u00a0+\u00a0b_T_and_Viol_O *VT*Violformula: O ~ b_B_and_Viol_O * VB*Viol\u00a0+\u00a0b_Viol_O * Viol\u00a0+\u00a0b_B_and_Lie_O * VB*Lie\u00a0+\u00a0b_C_and_Lie_O * VC*Lie\u00a0+\u00a0b_T_and_Lie_O * VT*Lie+ b_Lie_O * Lie\u00a0+\u00a0a_Int1_or_Int2[(Int1+Int2 > 0 ? 1: 0)]+ Estimates:Inference for Stan model: d4bbc50738c6da1b2c8e7cfedb604d80.4 chains, each with iter=5000; warmup=2000; thin=1;post-warmup draws per chain=3000, total post-warmup draws=12,000.The model created from the \"relationship tree\" can be fitted with MCMC simulation using the function bvl_modelFit. The structure of the function bvl_modelFit is partly dissimilar with other currently existent Bayesian analysis packages because it does not require users to construct conventional mathematical relationships among variables as well as set up the prior distribution for each relationship. One only need to input the name of constructed \"relationship tree\", the dataset, and mandatory set-up for MCMC simulation. As the bayesvl package was coded utilizing the No-U-Turn Sampler (NUTS) sampler \u202fThe model is fitted using four chains, each with 5000 iterations of which the first 2000 are for warmup, resulting in a total of 12,000 post-warmup posterior samples. In general, the model's simulated results show a good convergence based on two standard diagnostics of MCMC simulation, n_eff, and Rhat. The n_eff represents the effective sample size, which is the number of iterations needed for effective independent samples W is the within-sequence variance.Where One can aesthetically visualize the convergence diagnostics, posterior distribution, and estimated results. The function bvl_plotTrace can generate the trace plots of the constructed model.R> bvl_plotTrace(model3)R> bvl_plotAcfstx is the sampled value of x at iteration t, T represents the total number of sampled values, and The mathematical formula for the autocorrelation parameter for lag\u00a0=\u00a0L is displayed below:R> bvl_plotGelmans (model3)Measuring how much variance there is between chains relative to how much variance there is within chains is another idea to check the convergence. If the average difference between chains is similar to average difference within chains (when Rhat\u00a0=\u00a01.0), the chains are well convergent. Nevertheless, the relative value might increase (when Rhat > 1.0) and indicates the less convergent tendency between chains, if there appears at least on orphaned or stuck chain R> bvl_plotParams Besides the mean and standard deviation of the posterior distribution summarized in the model fit above, one can visually present the estimated posterior distribution of every variable coefficient through histograms. The visualization can be made using the function bvl_plotParams. We visualize the estimated posterior distribution of every variable in the constructed model in four rows and three columns with the Highest Posterior Distribution Intervals (HPDI) at 89% see . The defR> bvl_plotIntervals)R> bvl_plotDensity)There are also other built-in alternatives to visually present the estimated results after simulation, such as bvl_plotIntervals and bvl_plotDensity. 
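Two typeset formulas in this subsection were lost during extraction. For reference, the lag-L autocorrelation and the Gelman-Rubin statistic that the surrounding definitions describe have the standard forms, with B the between-sequence variance:

$$\rho_L = \frac{\sum_{t=1}^{T-L}(x_t - \bar{x})(x_{t+L} - \bar{x})}{\sum_{t=1}^{T}(x_t - \bar{x})^2}, \qquad \hat{R} = \sqrt{\frac{\hat{V}}{W}}, \quad \hat{V} = \frac{T-1}{T}\,W + \frac{1}{T}\,B$$

Likewise, the bvl_modelFit call shown above without its arguments can be sketched as follows; the chain settings match the printed summary, and the argument names are assumed from the package documentation:
R> model3 <- bvl_modelFit(model3, data1, warmup = 2000, iter = 5000, chains = 4, cores = 4)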
The bvl_plotIntervals function helps visualize the coefficients and their interval, while the bvl_plotDensity function helps plot the posterior probability density of coefficients. The results can be plotted \"all-in-one\" or selectively by both functions. The following commands are to visualize the interval see and the R> bvl_plotDensity2dThe comparison between two different coefficients' distribution of posteriors can be plotted by the following code see :R> bvl_pbayesvl R package for social data analysis also provides the opportunity to construct a \"relationship tree\" among variables intuitively and graphically visualize simulated posterior, especially in the age of Big Data Recently, the reproducibility crisis and the problems of 'stargazing', p-hacking, or HARKing in statistical analysis have required the scientific community to be more rigorous in conducting research and find solutions for the persistent statistical issues. Thus, the method paper proposes Bayesian analysis as a substitution for the conventional frequentist approach. Bayesian statistics have the advantages of treating all unknown quantities probabilistically and incorporating prior knowledge or belief of scientists into the model as an alternative approach for frequentist analysis in social sciences. The usage of the The authors declare that they have no known competing for financial interests or personal relationships that could have appeared to influence the work reported in this paper."} +{"text": "A second monoclonal antibody\u2014ATA 842\u2014by Atara Biotherapeutics increased muscle mass and strength, as well as insulin sensitivity in old mice over a period of 4\u00a0weekshttps://www.pfizer.com/news/press\u2010release/press\u2010release\u2010detail/pfizer_terminates_domagrozumab_pf_06252616_clinical_studies_for_the_treatment_of_duchenne_muscular_dystrophy). However, in the mouse DMD mdx model, the mouse analogue of domagrozumab\u2014mRK35\u2014significantly increased body weight, muscle weights, grip strength, and ex vivo force production in the extensor digitorum longus (EDL) muscle.Myostatin\u2010targeting antibodies and soluble ActRIIB to block atrophic signalling in skeletal muscle have been studied extensively in animal models and human trials with varying success. In a progeric mouse model, the soluble ActRIIB\u2010Fc improved muscle mass and delayed morbidity.Journal of Cachexia, Sarcopenia and Muscle, Rooks et al.In this issue of the In general, there seems to be no direct relationship between muscle mass and strength,The authors have no conflict of interest regarding the subject of this editorial."} +{"text": "In this article, a Siamese network is applied to the drill wear classification problem. For furniture companies, one of the main problems that occurs during the production process is finding the exact moment when the drill should be replaced. When the drill is not sharp enough, it can result in a poor quality product and therefore generate some financial loss for the company. In various approaches to this problem, usually, three classes are considered: green for a drill that is sharp, red for the opposite, and yellow for a tool that is suspected of being worn out, requiring additional evaluation by a human expert. In the above problem, it is especially important that the green and the red classes not be mistaken, since such errors have the highest probability of generating financial loss for the manufacturer. 
Most of the solutions analysing this problem are too complex, requiring specialized equipment, high financial investment, or both, without guaranteeing that the obtained results will be satisfactory. In the approach presented in this paper, images of drilled holes are used as the training data for the Siamese network. The presented solution is much simpler in terms of the data collection methodology, does not require a large financial investment for the initial equipment, and can accurately qualify drill wear based on the chosen input. It also takes into consideration additional manufacturer requirements, like no green-red misclassifications, that are usually omitted in existing solutions. Drill wear state recognition belongs to the larger group of problems called tool condition monitoring, which deals with the evaluation of different machine parts\u2019 condition, as well as determining how long they can be used in the production process. Depending on the properties of each tool, as well as the requirements of the final product, different signals can be recorded and later tested using various methods, to obtain the final evaluation. Quite a few procedures in this direction also deal with the main topic of this paper, which is drill wear state recognition. From the manufacturer\u2019s point of view, when the drill starts to become dull, it should be replaced as quickly as possible. Extending the use time of such a tool can result in poor product quality and therefore generate financial loss for the company. Manual evaluation of the drill state is possible and was initially done during the production process, but this is very time consuming, resulting in the prolongation of the entire procedure. A faster and more automated approach was needed, which resulted in extensive research on this subject. For example, one of the existing solutions focuses on measuring tool wear using two approaches: conventional methods and estimation with a customized software combining artificial neural networks and flank wear image recognition . In thisExisting solutions vary greatly in their approach, especially the data collection methodology. As is often the case, especially when it comes to the usage of specialized equipment, the most visible advancements have been made in medicine. For example, in , the autThe solution presented in this paper takes into account different approaches to deep learning in general single-lens reflex digital camera with a 35.9 \u00d7 24.0 mm CMOS image sensor. The entire process was performed in cooperation with the Institute of Wood Sciences and Furniture at Warsaw University of Life Sciences, Poland. For test purposes, a standard CNC vertical machine centre was used. Drilling was performed on a standard, melamine-faced chipboard , which is typically used in the furniture industry. The dimensions of the test piece were 300 \u00d7 35 \u00d7 18 mm. A regular, Faba WP-01 double-blade drill for through drilling equipped with a tungsten carbide tip was used. The drill\u2019s overall length was 70 mm, with a shank length equal to 25 mm, a flute length of 40 mm, a shank diameter of 10 mm, and a 12 mm drill diameter. The clearance angle on the drill face was \u221215.45 degrees. The rake angle was equal to 0 degrees, and the helix angle for the tool used was 15 degrees. 
The images of the equipment used are presented in The data set used in the current experiments was similar to that in the previous works ,15,16,17Usually, three classes are used in drill wear state recognition: red, green, and yellow. In this case, the obtained samples were divided and labelled manually, using the drill wear rate. For the manual evaluation of the drill state, external corner wear (W(mm)) was adopted as the main condition indicator and was periodically monitored using a standard workshop microscope . Based on the obtained values, three classes for drill wear were selected: green for W < 0.2 mm, yellow for W in the range between 0.2 and 0.35 mm, and red for W > 0.35 mm. Those classes were also used for the drill wear definition in the current, automated approach. In the presented case, the yellow class was used mainly as a buffer for the manufacturer. In the case of the furniture industry, depending on the type of elements produced, different hole qualities can be acceptable. In this case, depending on the manufacturer\u2019s preferences, the yellow class can later be assigned either to the green or red classes in the final production, hence expanding the overall method\u2019s customizability. Example images representing different drill wear classes are presented in The original data set contained significantly more examples for the green class than the remaining two (yellow and red). Since CNN was used as a base network for the presented solution, it was not desirable for the data set to be imbalanced in such a way . To correct this, data augmentation methodologies were used, to ensure an even representation of each class . With the data balanced, the training process should not favour any of the classes. Initial operations performed on the data samples also included resizing each of the images on the fly to a size equal to 64 \u00d7 64 \u00d7 3 pixels. The training input was also normalized by dividing each value by 255, to ensure that they were in the range. Since 5-fold cross-validation was used, the input data were split between the 5 folds, and each of them was additionally divided into two subsets, the first for training and the second one for validation. The structure of each fold is shown in Siamese networks are novel algorithms used in image recognition. The first approaches with this type of procedure focused specifically on face recognition. One of the first applications was a verification system for identifying workers in a building. When it comes to this problem in general, there are two main areas to consider: verification of whether a person in the current image is one stored in a database under a specified ID and recognizing if the person from the input image is one of those stored in the original database. Especially in systems with large amounts of users , accuracy is a very important factor. While having a 99% recognition rate might be acceptable for other applications, it is not the case here. Even if such a system has a 1% error rate, with 100 people in the database, the possibility of not recognizing the current person correctly is still quite high. Additionally, for most cases with face recognition, the algorithm needs to be able to recognize the person while using a single image (the one-shot learning problem). Using CNN for such an approach is not good enough, since firstly, the amount of training data is minimal, and secondly, each time a new person is added to the system, the network would require retraining. 
This is where the approach used in Siamese networks has the advantage.
To be able to calculate the distance between the input images, both of them are encoded using identical CNN networks, so that each image is represented as a feature vector instead of the usual classification: rather than using the final classifier from the CNN (or other network), the entire process stops at one of the embedding layers. By using this approach, two different, comparable encodings f(x_1) and f(x_2) of the images can be obtained, and the distance between them can be measured. In general, it can be described as:

$$d(x_1, x_2) = \lVert f(x_1) - f(x_2) \rVert^2$$

In that case, if the difference between the two images is greater than the set threshold, the person is classified as different, and as the same if this value is below that threshold. The idea of first launching two identical CNN networks to produce feature vectors, and secondly using those vectors to calculate the difference measure between the images, is the basis of the Siamese network architecture.
The Siamese network is a good example of a solution that can distinguish between instances of different classes and specifically determine whether the image provided as the input is the same as the one representing the original class. In terms of face recognition, it would determine whether the same person is in the picture. In the case of the solution presented here, with some additional modifications, it should point to which drill wear class the provided example belongs.
To train networks used for such recognition, a few steps are required, as well as a definition of the function used to distinguish between positive and negative examples of each class. The presented solution needs to be able to do two things: recognize the same class in two different images, and notice that the presented class is different from the one to which it is compared. To achieve that, the following images are required: the first one (the anchor A), containing the element representing a single class, the positive image example P, containing the same class, and the negative image N, with a different class. When the distance between those images is calculated (from the obtained image representations), the ideal outcome produces results for which the distance between the images containing the element of the same class is lower than in the case of images containing elements of different classes. What is more important for this approach to work accurately is that this distance needs to be significant, reaching at least some predefined margin. The above relation can be described using the following triplet loss:

$$\mathcal{L} = \sum_{i=1}^{N} \max\!\left( \lVert f(A_i) - f(P_i) \rVert^2 - \lVert f(A_i) - f(N_i) \rVert^2 + \alpha,\; 0 \right)$$

- α is a margin that is enforced between positive and negative pairs
- τ is the set of all possible triplets (A_i, P_i, N_i) in the image set and has cardinality N.
At this point, the loss that needs to be minimized has the form given above, and the overall procedure is summarized in Algorithm 1.
Algorithm 1: Siamese network training.
Step 1: Generating the training set
for each input sample x_i do
    Use knowledge about the data set to find the samples x_j that are similar to x_i
    Pair sample x_i with the other samples, labelling each pair y = 1 if the two samples are similar and y = 0 otherwise
end for
Combine all the pairs to form the labelled training set.
Step 2: Training
while convergence is not reached do
    for each pair do
        if y = 1, update the network weights so that the distance between the two encodings decreases
        if y = 0, update the network weights so that the distance grows towards at least the margin
    end for
end while
The margin parameter (α) is a hyperparameter that needs to be manually adjusted to each classification problem. 
In the case of the topic described in this paper, one type of positive pair (with label y = 1) and two types of negative pairs were used in that process . Since Siamese networks were originally for the face recognition problem, using a similar approach for the drill wear evaluation considered in this work, some adjustments were required. During the training process, first, the set of examples was created, where each example would contain two samples: anchor and either a positive (P) or a negative (N) example. In this case, instead of a single image that can either be the same or different , a total of three classes were considered. First would be the positive example, with the same class as the anchor. In this case, two negative examples can be generated, containing either of the remaining classes.The approach first divides the entire data set into positive and negative pairs. In the case of negative examples, to increase the diversity of the training set, the class is randomly chosen from the two that are different than the one to which the anchor belongs. Each of the initial images will generate two pairs used for training: one positive, where the anchor is paired with the image of the same class, and one negative, in which the anchor will be paired with an example from a different class. To calculate the distance between the images in consecutive samples, the contrastive loss function is used (see ).Algorithm 2:\u00a0Network training algorithm used for the drill wear recognition\u00a0problem.Create positive and negative pairs:for each of initial D = 3 classes dofor all images in each class do\u2003\u2003\u2003A = current image with index = i\u2003\u2003P = next image from the same class with index = i + 1\u2003\u2003PairPositive = with label = 1\u2003\u2003Randomly choose one of the remaining classes (different than the current one)\u2003\u2003Choose negative image N with index = i\u2003\u2003PairNegative = with label = 0end for\u2003end forCalculate the contrastive loss function:for each created pair do\u2003Contrastive loss = end forReturn classificationAs a base network for the learning process, CNN is used, with three convolutional layers. 
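The contrastive-loss expression dropped from Algorithm 2 above is assumed here to be the standard Hadsell-style form, with y = 1 for a positive (same-class) pair, d the Euclidean distance between the two embeddings, and m the margin:

$$\mathcal{L}(y, d) = y\,d^2 + (1 - y)\,\max(m - d,\; 0)^2$$

A plain-R sketch of the same expression, useful for checking values by hand:
R> contrastive_loss <- function(y, d, m = 1) {
+    # y = 1: positive pair, penalize large distances; y = 0: negative pair,
+    # penalize distances that fall short of the margin m
+    y * d^2 + (1 - y) * pmax(m - d, 0)^2
+  }
R> contrastive_loss(c(1, 0), c(0.2, 0.3))  # one positive pair, one negative pair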
The detailed structure of the model prepared for the problem chosen as a main topic of this paper is outlined in Listing 1Model: \u201cCNN\u201d_________________________________________________________________Layer (type) Output Shape Param #=================================================================conv2d_1 (Conv2D) 1792_________________________________________________________________activation_1 (Activation) 0_________________________________________________________________max_pooling2d_1 0_________________________________________________________________dropout_1 (Dropout) 0_________________________________________________________________conv2d_2 (Conv2D) 36,928_________________________________________________________________activation_2 (Activation) 0_________________________________________________________________max_pooling2d_2 0_________________________________________________________________dropout_2 (Dropout) 0_________________________________________________________________conv2d_3 (Conv2D) 18,464_________________________________________________________________activation_3 (Activation) 0_________________________________________________________________max_pooling2d_3 0_________________________________________________________________dropout_3 (Dropout) 0_________________________________________________________________flatten_1 (Flatten) 0_________________________________________________________________dense_1 (Dense) 147,584_________________________________________________________________dropout_4 (Dropout) 0_________________________________________________________________dense_2 (Dense) 6450=================================================================Total params: 211,218Trainable params: 211,218Non trainable params: 0_________________________________________________________________During previous experiments ,15,16,17In the current approach, the Siamese network, adjusted to the presented classification problem, was used. To evaluate the obtained results, additional algorithms were implemented, choosing procedures that were successful in previous experiments, but similarly adjusting them to include the new requirement of minimizing the number of misclassifications between the red and green classes.Algorithm 3:\u00a0Accuracy function used for algorithm\u00a0evaluation.\u2003Require: \u2003\u2003Compute classification accuracy with a fixed threshold on image distances\u2003\u2003Return K.mean))\u2003\u2003model.compileDuring the experiments, five-fold cross-validation was used , an early stopping mechanism was used. The patience parameter was used, set at five epochs, meaning that if during that time, there was no improvement to the solution accuracy, the training process was stopped, and the best obtained model was saved. Since the data used during the experiments were stored in a time series manner , it was additionally incorporated in the overall approach, to try and improve the algorithm\u2019s accuracy. Instead of a single image, sets of consecutive images of different lengths were used for the training process. With such an approach, the algorithm should be able to learn how hole edges change, while the drill is steadily dulling. The window parameter was incorporated into the solution, testing sequences of 5, 10, 15, and 20 images, with no window approach used as the baseline . The first solution, which did not use any window, obtained an overall accuracy of 67% , as well as the classification methodology . 
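As a cross-check of Listing 1, the base network can be rebuilt from the listed shapes and parameter counts. The sketch below (R interface to Keras) is a hedged reconstruction: the filter counts (64, 64, 32), the 128-unit dense layer, the 50-dimensional output and the 64 x 64 x 3 input follow from the listing, while the ReLU activations and the dropout rates are assumptions, since the listing does not record them:
R> library(keras)
R> base_network <- keras_model_sequential() %>%
+    layer_conv_2d(filters = 64, kernel_size = c(3, 3), input_shape = c(64, 64, 3)) %>%
+    layer_activation("relu") %>%                 # activation assumed to be ReLU
+    layer_max_pooling_2d(pool_size = c(2, 2)) %>%
+    layer_dropout(rate = 0.25) %>%               # rate assumed, not in the listing
+    layer_conv_2d(filters = 64, kernel_size = c(3, 3)) %>%
+    layer_activation("relu") %>%
+    layer_max_pooling_2d(pool_size = c(2, 2)) %>%
+    layer_dropout(rate = 0.25) %>%
+    layer_conv_2d(filters = 32, kernel_size = c(3, 3)) %>%
+    layer_activation("relu") %>%
+    layer_max_pooling_2d(pool_size = c(2, 2)) %>%
+    layer_dropout(rate = 0.25) %>%
+    layer_flatten() %>%
+    layer_dense(units = 128) %>%
+    layer_dropout(rate = 0.25) %>%
+    layer_dense(units = 50)  # (128 + 1) x 50 = 6450 weights, as in Listing 1
Calling summary(base_network) should reproduce the 211,218 total parameters of Listing 1.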
Since in previous experiments, some of the algorithms were tested and performed well for the overall accuracy parameter, the same algorithms were used for the current comparison. The final set contained the VGG19 pretrained network, 2 ensemble algorithms, the first using 5 and the second using 10 random VGG16 networks, the CNN model trained from scratch, and 5 random initializations of this model. All of those algorithms were trained using the window methodology, starting with no window and finishing at a window size of 20. The accuracy results obtained are presented in Next, the Siamese network approach using different windows was tested, and it achieved the best accuracy results for a window size of 20 (82%). Although for smaller windows, it showed poorer results than some algorithms, it quickly outperformed them for larger windows. While the no window approach produced to many critical errors . Additionally, it is able to accurately distinguish between the red and green classes, with a total number of 37 misclassifications between them (22 red-green and 15 green-red errors). To the best of authors\u2019 knowledge, this is the first application of this methodology to the wood industry. The presented approach is highly adjustable, since in the case of changes in the samples , transfer learning can be used to retrain the previous model for a new application, without the need to start from scratch.To summarize, the presented solution achieved an overall accuracy and misclassification rate that fit into the initial acceptable ranges. With the simplified data collection methodology and low initial costs, it is readily applicable to the actual work environment, with very positive, initial feedback from the manufacturer. Furthermore, the Siamese network approach seems very promising, and while further research is still required, it is believed that additional improvement of both the accuracy and critical error rate is still possible."} +{"text": "Among metrological parameters, average temperature had the strongest correlation (rs = \u22120.675) with the cases. About 82% of Bangladeshi isolates had D614G at spike protein. Both temperature and UV index had strong effects on the frequency of mutations. Among host factors, coinfection is highly associated with frequency of different mutations. This study will give a complete picture of the effects of metrological parameters on COVID-19 cases, fatalities and mutation frequency that will help the authorities to take proper decisions.Coronavirus disease-2019 (COVID-19) has caused the recent pandemic worldwide. Research studies are focused on various factors affecting the pandemic to find effective vaccine or therapeutics against COVID-19. Environmental factors are the important regulators of COVID-19 pandemic. This study aims to determine the impact of weather on the COVID-19 cases, fatalities and frequency of mutations in Bangladesh. The impacts were determined on 1, 7 and 14 days of the case. The study was conducted based on Spearman's correlation coefficients. The highest correlation was found between population density and cases ( Coronaviridae) has triggered the coronavirus 2019 (COVID-19) pandemic worldwide [Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) genome of ~30\u00a0000 bases , 2. The The main aim of this study is to analyse the correlation between metrological factors and frequency of mutations in SARS-CoV-2. 
The second objective of this study is to determine the relationship between host factors and SARS-CoV-2 mutation frequency. The third objective of this study is to investigate the correlation between environmental factors and COVID-19 pandemic in Bangladesh. This study will provide a better insight on the effects of environmental factors on COVID-19 pandemic in Bangladesh.2, Dhaka is the most populous capital in the world. The study period was from 07 March 2020 to 14 August 2020. On 07 March 2020, COVID-19 patients were detected for the first time in Bangladesh and considered as day 1 for the study.This study focused on the correlation of COVID-19 with metrological parameters in eight cities in Bangladesh. This study included Dhaka (23.71\u00b0N to 90.41\u00b0E), Chattogram (22\u00b020\u203206\u2033N to 91\u00b049\u203257\u2033E), Bogura (24\u00b051\u2032N to 89\u00b022\u2032E), Khulna (22\u00b049\u2032N to 89\u00b033\u2032E), Sylhet (24\u00b054\u2032N to 91\u00b052\u2032E) and Mymensingh (24\u00b045\u203214\u2033N to 90\u00b024\u203211\u2033E), Barishal (22\u00b048\u20320\u2033N to 90\u00b030\u20320\u2033E) and Rangpur (25\u00b044\u2032N to 89\u00b015\u2032E). With a population density of 46\u00a0997\u00a0person/kmhttps://covid19bd.idare.io/) and Institute of Epidemiology, Disease Control and Research (https://www.iedcr.gov.bd/website/) in Bangladesh, and cross-confirmed by analysing the data from official websites of WHO (www.who.int), Bing (www.bing.com/covid), Worldometers (www.worldometers.info/coronavirus/) and Johns Hopkins University (https://coronavirus.jhu.edu/). Environmental data including minimum temperature (\u2070C), average temperature (\u00b0C), maximum temperature (\u00b0C), UV index, wind speed (km/h), rain fall (mm), relative humidity (%) were collected from different databases including official website of Bangladesh Meteorological Department (http://live4.bmd.gov.bd/satelite/v/sat_infrared/), meteoblue (www.meteoblue.com), AccuWeather (www.accuweather.com) and WeatherOnline (www.weatheronline.co.uk) during this study. The whole genome of SARS-CoV-2 isolated from Bangladesh and reference genome sequences were collected from the official website of GISAID (https://www.gisaid.org/). 
Accession number of sequences are: EPI_ISL_437912, EPI_ISL_445213, EPI_ISL_445214, EPI_ISL_445215, EPI_ISL_445216, EPI_ISL_445217, EPI_ISL_445244, EPI_ISL_447590, EPI_ISL_447897, EPI_ISL_447899, EPI_ISL_447904, EPI_ISL_450339, EPI_ISL_450340, EPI_ISL_450340, EPI_ISL_450341, EPI_ISL_450342, EPI_ISL_450343, EPI_ISL_450344, EPI_ISL_450345, EPI_ISL_450839, EPI_ISL_450840, EPI_ISL_450841, EPI_ISL_450842, EPI_ISL_450843, EPI_ISL_455420, EPI_ISL_455458, EPI_ISL_455459, EPI_ISL_458133, EPI_ISL_462090, EPI_ISL_462091, EPI_ISL_462092, EPI_ISL_462093, EPI_ISL_462094, EPI_ISL_462095, EPI_ISL_462096, EPI_ISL_462097, EPI_ISL_462098, EPI_ISL_464159, EPI_ISL_464160, EPI_ISL_464161, EPI_ISL_464162, EPI_ISL_464163, EPI_ISL_464164, EPI_ISL_466626, EPI_ISL_466627, EPI_ISL_466628, EPI_ISL_466629, EPI_ISL_466630, EPI_ISL_466636, EPI_ISL_466637, EPI_ISL_466638, EPI_ISL_466639, EPI_ISL_466644, EPI_ISL_466645, EPI_ISL_466649, EPI_ISL_466650, EPI_ISL_466686, EPI_ISL_466687, EPI_ISL_466688, EPI_ISL_466689, EPI_ISL_466690, EPI_ISL_466691, EPI_ISL_466692, EPI_ISL_466693, EPI_ISL_466694, EPI_ISL_468070-EPI_ISL_468078, EPI_ISL_469285, EPI_ISL_469286, EPI_ISL_469297-EPI_ISL_469300, EPI_ISL_470801, EPI_ISL_475083, EPI_ISL_475084, EPI_ISL_475165-EPI_ISL_475173, EPI_ISL_475238, EPI_ISL_475570, EPI_ISL_475571.The data of COVID-19 cases and fatalities were collected from official websites of the Directorate General of Health Services (DGHS) (https://blast.ncbi.nlm.nih.gov/Blast.cgi). Multiple sequence alignment for the whole genome of Bangladeshi novel coronavirus strains and reference strains (Wuhan/WIV04/2019 and NC_045512/Wuhan-Hu-1) were conducted by using BioEdit 7.2.6 by using the ClustalW Multiple Alignment algorithm. Mutational analysis was performed for specific positions of novel coronavirus whole genome nucleotide sequences and peptide chains.The nucleotide sequences of the whole genome of novel coronaviruses were analysed using Chromas 2.6.5 . Sequence homology was determined by using the BLASTn program (rs) was determined between metrological parameters and COVID-19 cases and fatalities [rs) was determined among environmental factors, host factors and mutation frequency of novel coronaviruses. The association between two variables can be defined using a monotonic function by using Spearman's rank correlation coefficient (rs). The coefficient equation can be written asAll data were analysed using unbiased statistical approach. Spearman's rank correlation coefficient had been reported from Dhaka followed by Chattogram (n\u00a0=\u00a015\u00a0775) Bogura (n\u00a0=\u00a05614), Khulna (n\u00a0=\u00a05039), Sylhet (n\u00a0=\u00a04912) and Mymensingh (n\u00a0=\u00a03078), Barishal (n\u00a0=\u00a02806) and Rangpur (n\u00a0=\u00a01736), respectively , followed by Dhaka (76%), Rangpur (74%) and Khulna (73%), respectively . The higUV is an important metrological factor that affects the transmission and mutation frequency of novel coronaviruses. During this study, the average UV index was recorded between 6.5 and 8 in Bangladesh. The highest average UV index was recorded in Barishal and Bogura followedWind speed has direct effect on the spread of droplet nuclei containing virus particles. The average wind speed was recorded from 3\u00a0km/h to 19\u00a0km/h in this study. 
During this study, the highest average wind speed was recorded in Chattogram (17 km/h), followed by Bogura (16 km/h), Mymensingh (16 km/h), Khulna (15 km/h), Dhaka (14 km/h), Barishal (14 km/h), Rangpur (13 km/h) and Sylhet (11 km/h), respectively (Fig. 6).

Dhaka (46 997 person/km²) is the most populous city in the world with a total population of 21 006 000, followed by Khulna (34 000 person/km²) with a total population of 1 122 000, Chattogram (19 800 person/km²) with a total population of 9 453 496, Sylhet (19 865 person/km²) with a total population of 526 412, Barishal (10 524 person/km²) with a total population of 385 093, Bogura (7763 person/km²) with a total population of 540 000, Mymensingh (5200 person/km²) with a total population of 476 543 and Rangpur (4167 person/km²) with a total population of 15 665 000, respectively.

Spearman's correlation analysis between meteorological parameters and the COVID-19 pandemic showed that, first, average temperature on the day had the highest correlation with the number of cases (rs = −0.675), followed by average temperature 7 days ago (rs = −0.547), maximum temperature on the day (rs = −0.512) and minimum temperature on the day (rs = −0.486). Maximum temperature on the day had the highest correlation with total fatalities (rs = −0.611). The correlations for both COVID-19 cases and fatalities are negative, which indicates that at lower temperatures the numbers of cases and fatalities increase.

Second, relative humidity on the day had a greater correlation with the number of fatalities than with the number of cases, and the correlation between relative humidity and the COVID-19 pandemic decreased with increasing time span. Third, the association between the UV index and the number of cases was highest on the day of the cases; similarly, the correlation between the UV index and the number of fatalities was highest on the day. The correlations of relative humidity and UV index with the COVID-19 pandemic were also negative. Fourth, among the environmental factors, the average wind speed on the day had the highest correlation with the number of cases: the higher the wind speed, the greater the numbers of cases and fatalities. Finally, the total population and the population density of a city were highly correlated with the numbers of cases and fatalities in that city. Population density had the highest correlation with cases (rs = 0.712) and fatalities (rs = 0.678), followed by the total population in every city.

Regarding mutation frequency, first, the frequency of D614G had the highest correlation (rs = 0.611) with average temperature, but the frequency of rare mutations at the spike protein had the highest correlation (rs = 0.658) with maximum temperature. Second, the UV index was highly correlated with the frequencies of all mutational events, and the highest correlation was detected (rs = 0.678) with rare mutations at ORF1ab. Third, relative humidity had its highest correlation (rs = 0.389) with the frequency of D614G. Fourth, the amount of rainfall was also strongly correlated with the frequency of D614G. All of the meteorological factors were positively related to the frequencies of the different mutations, indicating that increased temperature and UV index may favour the origin of new mutations in the novel coronavirus. Finally, the host factors coinfection and gender variability were also positively correlated with mutation frequency.
Coinfection had the highest association with common mutations at other structural proteins (rs = 0.671). The age and gender of the patients were also correlated with mutation frequency.

After the determination of the first genome of the novel coronavirus, thousands of mutational events have occurred. Some of these mutations have made the virus more persistent, more environmentally resistant and more deadly. The first 100 genomes of the novel coronavirus were analysed in this study, and mutations were detected throughout the whole genome. The most common mutations at the ORF1ab region were P323L (NSP12) (88%) and I120F (NSP2) (72%), and the most common mutation at the spike protein was D614G.

Among the 100 patients of this study, males were the predominant gender group (63%), followed by female patients (37%). Most of the cases (24%) were detected in the age group 30−39 years, followed by 19% in the 20−29 years and 18% in the 40−49 years age groups, respectively (Fig. 8).

With a moderate transmission rate, the COVID-19 pandemic has spread throughout the entire world within a very short time and has continued to infect people [31, 32]. In this study, negative correlations were detected between minimum temperature and COVID-19 cases (rs = −0.486), between maximum temperature and COVID-19 cases (rs = −0.512) and between average temperature and COVID-19 cases (rs = −0.675) [33, 34]. Earlier work reported comparable correlations with cases (rs = −0.586) and fatalities (rs = −0.609) [33, 34]. Previous studies have also reported significant correlations of coronavirus infections with meteorological parameters [35, 36], including the rs values for COVID-19 cases in China [33, 34, 39].

Şahin detected a correlation (rs = 0.687) between total population and COVID-19 cases in Turkey. However, the present study included both the total population and the population density for COVID-19 cases and fatalities, and reported stronger correlations between total population and cases (rs = 0.645)/fatalities (rs = 0.578), and between population density and cases (rs = 0.712)/fatalities (rs = 0.678), than the previous study.

Mutational events are among the most important aspects of novel coronavirus infection, transmission and persistence in host cells. Mutations at the spike protein can create difficulties in developing effective vaccines or therapeutics against novel coronaviruses [29, 30].

This study determined the correlation between meteorological parameters and the mutational frequency of novel coronaviruses for the first time, including both meteorological parameters and host factors. Strong correlations were detected between average temperature and the frequency of mutations at ORF1ab (common mutation, rs = 0.654; rare mutation, rs = 0.598) and at S-D614G (rs = 0.611), and between maximum temperature and rare mutations at S (rs = 0.658). Among the other meteorological factors, the UV index had the highest correlation with the frequency of mutations at every site in the genome. Among host factors, coinfections had strong correlations with the frequency of mutations at ORF1ab (common mutation, rs = 0.485; rare mutation, rs = 0.642), at S-D614G (rs = 0.644) and with common mutations at other proteins (rs = 0.671). Both age and gender had significant correlations with the frequency of mutations at many sites of the genome. As of 18 August 2020, no other study had described the correlation between meteorological parameters/host factors and the frequency of mutations in novel coronaviruses. However, for other viruses such as influenza and Newcastle disease viruses, there are reports of environmental and host factors being associated with virus mutations [42, 43].
Studies have found significant effects of environmental factors and UV radiation on the frequency of mutation of influenza virus and Newcastle disease virus [42, 43].

This study has described the strongest correlations between meteorological parameters and COVID-19 cases, strong correlations between meteorological parameters and COVID-19 fatalities, the first correlations between meteorological factors and the frequency of mutations in novel coronaviruses, and the first correlations between host factors and mutation frequency. This study also provided the mutation frequency at different sites of the novel coronavirus genome. A complete picture of the effects of meteorological parameters on the COVID-19 pandemic and on novel coronavirus mutation frequency has been depicted in this study, which can serve as a baseline and a guideline for future studies focusing on the environmental and host factors affecting the COVID-19 pandemic and coronavirus mutations.

Other factors, such as the duration of lockdown, the mobility of huge numbers of workers, social and religious gatherings, not using masks, the movement of people during vacations and the lack of proper detection of COVID-19 patients, also significantly affect the pandemic. If direct contact is not avoided and social distance and personal hygiene are not maintained, environmental factors alone cannot control the COVID-19 pandemic. This study describes a high frequency of common and rare mutations in the novel coronavirus genome in Bangladesh. Circulation of these mutants will certainly increase the duration of the pandemic and reduce the effectiveness of vaccines or therapeutics in the future. The main limitation of this study is the variation in case numbers: the actual case and fatality numbers may vary slightly owing to incomplete diagnosis of the population. In the future, studies including more virus isolates, larger amounts of clinical data and environmental data covering longer periods could predict the effects of various factors on COVID-19 more accurately.

To the best of our knowledge, this is the first study reporting the correlation of environmental factors with the COVID-19 pandemic in three time frames in Bangladesh, at temperatures of about 27 °C. The strongest correlations between meteorological factors and COVID-19 cases/fatalities were observed on the day of the cases/fatalities. The highest correlation was detected between population density and cases, followed by total population and cases, indicating that mobility and crowding actively increase cases and fatalities. For the first time, this study describes the effects of meteorological parameters on the frequency of mutations at different sites in the novel coronavirus. Both temperature and UV index had strong effects on different mutation events. Among host factors, coinfection also strongly affected different mutations. By including COVID-19 cases, fatalities, mutations, mutation frequency and clinical data, this study provides a complete picture of the COVID-19 pandemic and of the environmental effects on it, and will provide useful implications for both policy makers and the public in taking decisions to reduce the health burden of this outbreak."}
+{"text": "High-throughput RNA-seq enables comprehensive analysis of the transcriptome for various purposes.
However, this technology generally generates massive amounts of sequencing reads with a shorter read length. Consequently, fast, accurate, and flexible tools are needed for assembling raw RNA-seq data into full-length transcripts and quantifying their expression levels. In this protocol, we report TransBorrow, a novel transcriptome assembly software specifically designed for short RNA-seq reads. TransBorrow is employed in conjunction with a splice-aware alignment tool (e.g. Hisat2 and Star) and some other transcriptome assembly tools. The protocol encompasses all necessary steps, starting from downloading and processing raw sequencing data to assembling the full-length transcripts and quantifying their expressed abundances. The execution time of the protocol may vary depending on the sizes of the processed datasets and the computational platforms.

To install Star, first create a subdirectory "Star" in your directory and enter it. Next, employ the "wget" command to download the most recent package from the appropriate download link address. Subsequently, extract the contents of the downloaded package to obtain the executable file. Lastly, add the path of the Star binary to a directory encompassed within your system's PATH variable. An illustration of installing Star is shown as follows.
$ mkdir Star
$ cd Star
$ wget https://github.com/alexdobin/STAR/archive/2.5.3a.tar.gz
$ tar -xzf 2.5.3a.tar.gz
$ cd STAR-2.5.3a
$ echo "export PATH=$PATH:/mnt/data0/zhaody/Star/STAR-2.5.3a" >> ~/.bashrc
$ source ~/.bashrc

Samtools is a software package designed for processing SAM and BAM format files, which offers a diverse range of command-line utilities for manipulating, converting, and analyzing SAM/BAM files. It enables various tasks such as file format conversion (e.g. SAM to BAM conversion), sorting and indexing SAM/BAM files, assessing coverage and depth of aligned loci, extracting sequences from specific regions, and more.

To install Samtools, the following steps can be undertaken. First, create a subdirectory named "Samtools" within your directory. Subsequently, navigate to the "Samtools" directory and use the "wget" command to download the latest package from the appropriate download link address. Afterwards, extract the downloaded package and configure the build for Samtools accordingly. Finally, add the Samtools binary directory to the PATH environment variable in the shell configuration file (see the following illustration).

An illustration of installing Samtools is shown as follows.
$ mkdir Samtools
$ cd Samtools
$ wget https://nchc.dl.sourceforge.net/project/samtools/samtools/1.17/samtools-1.17.tar.bz2
$ tar jxvf samtools-1.17.tar.bz2
$ cd samtools-1.17
$ ./configure --prefix=/mnt/data0/zhaody/Samtools/samtools-1.17
$ make
$ make install
$ echo "export PATH=$PATH:/mnt/data0/zhaody/Samtools/samtools-1.17" >> ~/.bashrc
$ source ~/.bashrc
After installing it, you can type the command "samtools" to see the help documentation information.
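The protocol does not demonstrate these Samtools operations on their own, so the following is a minimal sketch of the conversion, sorting, and indexing tasks just described; the file name example.sam is a hypothetical placeholder rather than a file produced by this protocol.
$ samtools view -bS example.sam > example.bam
$ samtools sort -@ 8 -o example.sorted.bam example.bam
$ samtools index example.sorted.bam
$ samtools flagstat example.sorted.bam
The "view -bS" call converts SAM to BAM, "sort" orders the alignments by coordinate (with "-@ 8" worker threads), "index" builds the .bai index required by many downstream tools, and "flagstat" prints simple alignment statistics.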
StringTie is an efficient transcriptome assembler, which iteratively extracts the heaviest path from a splice graph and then estimates the abundance via a network flow algorithm [12].

To install StringTie, the following steps can be followed. First, create a subdirectory named "StringTie" within your directory. Next, navigate to the "StringTie" directory and utilize the "wget" command to download the latest package from the appropriate download link address. After downloading, extract the contents of the package to obtain the executable file. Finally, add the path of the StringTie binary directory to the PATH environment variable in the shell configuration file. An illustration of installing StringTie is shown as follows.
$ mkdir StringTie
$ cd StringTie
$ wget http://ccb.jhu.edu/software/stringtie/dl/stringtie-2.2.1.Linux_x86_64.tar.gz
$ tar -xzvf stringtie-2.2.1.Linux_x86_64.tar.gz
$ echo "export PATH=$PATH:/mnt/data0/zhaody/Stringtie/stringtie-2.2.1.Linux_x86_64" >> ~/.bashrc
$ source ~/.bashrc
If you have successfully installed it, you can type the command "stringtie -h" to see the help documentation information.

Cufflinks is a software tool specifically designed for analyzing RNA-seq data. It constructs an overlap graph model based on the fragment alignments and applies a minimum path cover model to search for the transcript-representing paths.

To install Cufflinks, the following steps can be undertaken. First, create a subdirectory named "Cufflinks" within your directory. Next, navigate to the "Cufflinks" directory and use the command "wget" to download the latest package from the appropriate download link address. After downloading the package, extract its contents to obtain the executable file. Finally, add the Cufflinks binary directory to the PATH environment variable in the shell configuration file. An illustration of installing Cufflinks is shown as follows.
$ mkdir Cufflinks
$ cd Cufflinks
$ wget http://cole-trapnell-lab.github.io/cufflinks/assets/downloads/cufflinks-2.2.1.Linux_x86_64.tar.gz
$ tar -xzvf cufflinks-2.2.1.Linux_x86_64.tar.gz
$ echo "export PATH=$PATH:/mnt/data0/zhaody/Cufflinks/cufflinks-2.2.1.Linux_x86_64" >> ~/.bashrc
$ source ~/.bashrc
If you have successfully installed it, you can type the command "cufflinks -h" to see the help documentation information.

Scallop is a highly efficient transcriptome assembly software designed for the reconstruction of transcripts from RNA-seq data. It was built upon the standard paradigm of the splice graph, and it decomposes the graphs by optimizing several competing objectives while preserving long-range phasing paths.

To install Scallop, the following steps can be followed. First, create a subdirectory named "Scallop" within the main directory. Next, navigate to the "Scallop" directory and utilize the command "wget" to download the latest package from the appropriate download link address. After downloading, extract the contents of the package to obtain the executable file. Finally, add the Scallop binary directory to the PATH environment variable in the shell configuration file. An illustration of installing Scallop is shown as follows.
$ mkdir Scallop
$ cd Scallop
$ wget https://github.com/Kingsford-Group/scallop/releases/download/v0.10.5/scallop-0.10.5_linux_x86_64.tar.gz
$ tar -xzvf scallop-0.10.5_linux_x86_64.tar.gz
$ echo "export PATH=$PATH:/mnt/data0/zhaody/Scallop/scallop-0.10.5_linux_x86_64" >> ~/.bashrc
$ source ~/.bashrc
If you have successfully installed it, you can type the command "scallop -h" to see the help documentation information.
The Boost library is a highly regarded, portable, and open-source C++ library that is essential for TransBorrow. To install Boost, follow these steps. First, create a subdirectory named "Boost" under your directory. Next, navigate into the "Boost" directory and use the command "wget" to download the latest package from the appropriate download link address. Then, unzip the downloaded package, change to the boost directory, and run "./bootstrap.sh". Finally, type "./b2 install --prefix=<YOUR_BOOST_INSTALL_DIRECTORY>" to install Boost.

Please note that the version number and URL provided here may need to be updated. Make sure to use the latest version of Boost and adjust the URL accordingly.

An illustration of installing Boost is shown as follows.
$ mkdir Boost
$ cd Boost
$ wget https://boostorg.jfrog.io/artifactory/main/release/1.82.0/source/boost_1_82_0.tar.gz
$ tar -xzvf boost_1_82_0.tar.gz
$ cd boost_1_82_0
$ ./bootstrap.sh
$ ./b2 install --prefix=/mnt/data0/zhaody/Boost
If Boost is installed successfully, you will see the "lib" and "include" directories under the YOUR_BOOST_INSTALL_DIRECTORY directory. Take note of the Boost installation directory, because you need to tell the TransBorrow installer where to find Boost later on.

TransBorrow is an accurate and efficient transcriptome assembly algorithm, which borrows the assemblies from different assemblers to search for reliable subsequences by building a colored graph from those borrowed assemblies, and employs a newly designed path extension strategy to accurately search for a transcript-representing path cover over each splicing graph.

To install TransBorrow, create a subdirectory named TransBorrow under the main directory. After entering the directory, use the command "wget" to download the package from the appropriate download link address and unzip it. In addition, to configure the environment of TransBorrow, change to the "bamtools" directory and make a new directory named "build", then type the "cmake" and "make" commands to install it. An illustration of building bamtools is shown as follows.
$ mkdir TransBorrow
$ cd TransBorrow
$ wget https://sourceforge.net/projects/transcriptomeassembly/files/TransBorrow/TransBorrow_v.1.3.tar.gz
$ tar -xzvf TransBorrow_v.1.3.tar.gz
$ cd TransBorrow_v.1.3
$ cd bamtools
$ mkdir build
$ cd build
$ cmake ..
$ make
$ cd ../..
Assuming the build process finished correctly, you should be able to find the toolkit executable in the directory "./bin/", the Bamtools API and Utils libraries in "./lib/", and the Bamtools API headers in "./include/".

Then, add the "lib" and "include" directories (absolute paths) of both Boost and bamtools to the CMakeLists.txt file located in TransBorrow_v.1.3/src/ to configure the installation environment for TransBorrow (see the following illustration for details).

Please note that the directory structure and file names may vary depending on your specific setup.
Adjust the paths and commands accordingly. An illustration of setting the installation environment for TransBorrow is shown as follows.
$ cd src
$ vim CMakeLists.txt
set(BOOST_LIB_DIR /mnt/data0/zhaody/Boost/lib)
set(BOOST_INCLUDE_DIR /mnt/data0/zhaody/Boost/include)
set(BAMTOOLS_LIB_DIR /mnt/data0/zhaody/TransBorrow/TransBorrow_v.1.3/bamtools/lib)
#set(BAMTOOLS_LIB_DIR /storage/juntaosdu/yuting/bamtools/lib)
set(BAMTOOLS_INCLUDE_DIR /mnt/data0/zhaody/TransBorrow/TransBorrow_v.1.3/bamtools/include)
#set(BAMTOOLS_INCLUDE_DIR /storage/juntaosdu/yuting/bamtools/include)

Change to the TransBorrow root directory, make a new directory named "build" and change into it, then type the "cmake ../src" and "make" commands for the final installation of TransBorrow. An illustration of building TransBorrow is shown as follows.
$ cd ../
$ mkdir build
$ cd build
$ cmake ../src
$ make

Finally, add the TransBorrow build directory to the PATH environment variable in the shell configuration file. See the following illustration.
$ cd TransBorrow
$ echo "export PATH=$PATH:/mnt/data0/zhaody/TransBorrow/TransBorrow_v.1.3/build" >> ~/.bashrc
$ source ~/.bashrc
If you have successfully installed it, you can type the command "TransBorrow -h" to see the help documentation information.

The Gffcompare software is a powerful tool utilized for comparing, merging, annotating, and estimating the accuracy of one or more GFF/GTF files in comparison to reference annotations. To install it, follow the same pattern as for the tools above. An illustration of installing Gffcompare is shown as follows.
$ mkdir Gffcompare
$ cd Gffcompare
$ wget http://ccb.jhu.edu/software/stringtie/dl/gffcompare-0.12.6.Linux_x86_64.tar.gz
$ tar -xzvf gffcompare-0.12.6.Linux_x86_64.tar.gz
$ echo "export PATH=$PATH:/mnt/data0/zhaody/Gffcompare/gffcompare-0.12.6.Linux_x86_64" >> ~/.bashrc
$ source ~/.bashrc
If you have successfully installed it, you can type the command "gffcompare -h" to see the help documentation information.

TACO is a tool to reconstruct a consensus transcriptome from multiple RNA-seq data sets. TACO accepts as input a set of GTF files containing transcripts assembled from individual libraries, and it employs a dynamic programming path search strategy in the path graph to reconstruct the transcripts. To install TACO, the following steps should be followed. First, create a subdirectory named "TACO" within the main directory. Subsequently, navigate into the "TACO" folder and utilize the "wget" command to download the latest package from the appropriate download link address. Next, extract the downloaded package to acquire the executable file. Finally, add the TACO binary directory to the PATH environment variable. An illustration of installing TACO is shown as follows.
$ mkdir TACO
$ cd TACO
$ wget https://github.com/tacorna/taco/releases/download/v0.7.3/taco-v0.7.3.Linux_x86_64.tar.gz
$ tar -xzvf taco-v0.7.3.Linux_x86_64.tar.gz
$ echo "export PATH=$PATH:/mnt/data0/zhaody/TACO/taco-v0.7.3.Linux_x86_64" >> ~/.bashrc
$ source ~/.bashrc
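At this point, all of the tools required by the protocol should be installed. As an optional sanity check that is not part of the original protocol, you can confirm that each executable is visible on your PATH before proceeding; any name that prints no path indicates that the corresponding export line in ~/.bashrc needs to be revisited.
$ source ~/.bashrc
$ which hisat2 STAR samtools stringtie cufflinks scallop TransBorrow gffcompare taco_run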
The RNA-seq data can be downloaded from databases such as the NCBI or the European Bioinformatics Institute (EBI). Alternatively, researchers can use RNA-seq data sequenced by themselves. Note that the file format of the input sequencing data must be FASTA or FASTQ.

RNA-seq technology is mainly categorized into single-end sequencing and paired-end sequencing, based on differences in DNA library preparation during the sequencing process. Single-end sequencing means sequencing only one end of the target DNA/RNA, which requires less sequencing time and cost. In comparison, paired-end sequencing refers to sequencing both ends of the target DNA/RNA, which improves the accuracy of mapping and assembly. Therefore, the proposed protocol utilized three human RNA-seq data sets generated by different types of RNA-seq technology to demonstrate the workflow.

An illustration of downloading the sequencing data is shown as follows.
$ cd Sratoolkit
$ prefetch SRR7807492
$ cd SRR7807492
$ fastq-dump --split-3 SRR7807492.sra
$ cd ..
$ prefetch ERR3639851
$ cd ERR3639851
$ fastq-dump --split-3 ERR3639851.sra
$ cd ..
$ prefetch SRR10611964
$ cd SRR10611964
$ fastq-dump --split-3 SRR10611964.sra
$ cd ..

After obtaining the raw RNA-seq reads, download the human reference genome (the version used in this protocol was hg19) as follows. Create a directory named "ref_genome" in your home directory and utilize the "wget" command with the appropriate download link address to download the corresponding file. Once the download completes, extract the file (see the following illustration for details). The human genome is about 3 GB; ensure that you have sufficient storage capacity to accommodate both the downloaded and uncompressed files.
$ mkdir ref_genome
$ cd ref_genome
$ wget http://hgdownload.soe.ucsc.edu/goldenPath/hg19/bigZips/chromFa.tar.gz
$ tar -xzf chromFa.tar.gz
$ cat *.fa > ref_genome.fa
$ rm chr*.fa

In this protocol, all the reference transcripts were set as the ground truth to evaluate the performance of the assemblers. To acquire the reference transcripts, navigate to the "ref_genome" directory and employ the "wget" command along with the appropriate download link address (see the following illustration for details). An illustration of downloading the reference transcriptome is shown below.
$ cd ref_genome
$ wget https://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_44/GRCh37_mapping/gencode.v44lift37.annotation.gtf.gz
$ gunzip gencode.v44lift37.annotation.gtf.gz

Upon successful downloading, you will find three folders named SRR7807492, ERR3639851, and SRR10611964 within the Sratoolkit directory, which store the RNA-seq data in the ".fastq" format.
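Before moving on to alignment, it can be helpful to take a quick look at the downloaded reads. This optional check, which is not part of the original protocol, prints the first record of one FASTQ file and counts its reads; since every read occupies four lines in FASTQ format, the line count divided by four gives the number of reads.
$ head -n 4 SRR7807492_1.fastq
$ echo $(( $(wc -l < SRR7807492_1.fastq) / 4 ))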
1) The "hisat2-build" command generates the reference genome index and is typically followed by the reference genome file in ".fa" format and the desired output file name. The recommended command for this task is as follows.
# hisat2-build ref_genome.fa ref_index_genome
where ref_genome.fa is the reference genome.
Besides, when utilizing a transcriptome annotation library to aid in building the reference genome index, "hisat2-build" requires the use of the "--ss" parameter, which specifies the splice site information of the reference transcripts, and the "--exon" parameter, which denotes the exon information of the reference transcripts. The running command should be as follows.
# hisat2-build --ss genome.ss --exon genome.exon ref_genome.fa ref_index_genome
where genome.ss and genome.exon can be extracted from a transcriptome annotation file (in GTF format) using the scripts "extract_splice_sites.py" and "extract_exons.py" provided in the Hisat2 package.
2) The command "hisat2" is utilized for aligning RNA-seq reads to the reference genome. The recommended command is as follows.
# hisat2 -p 8 --dta -x ref_index_genome -1 reads_1.fastq -2 reads_2.fastq -S test_genome.sam
where the parameter "-p" specifies the number of running threads, typically set to be equal to or slightly fewer than the number of available CPU cores. The parameter "--dta" means reporting alignments tailored for transcript assemblers. The parameters "-x" and "-S" provide the indexed reference genome and the name of the output SAM file, respectively. The parameters "-1" and "-2" specify the two paired-end sequencing files in FASTQ format, reads_1.fastq and reads_2.fastq, respectively. If the sequencing data is single-end, the parameter "-U" is used with the corresponding FASTQ file.

The first step of using Hisat2 is to construct an index for the reference genome. Begin by navigating to the Hisat2 directory and employing the "mv" command to relocate the reference genome file from the "ref_genome" directory to the current directory. Subsequently, utilize the "hisat2-build" command to generate an index for the reference genome. This process will result in the creation of eight files with the ".ht2" suffix and typically takes approximately half an hour (see the following illustration for details). An illustration of the Hisat2 index building is shown below.
$ cd Hisat2
$ mv /mnt/data0/zhaody/ref_genome/ref_genome.fa .
$ hisat2-build ref_genome.fa ref_index_genome

It is noteworthy that incorporating the annotation information of the transcripts while building the genome index can improve the accuracy of Hisat2 alignment. Utilizing this approach is also an option, as follows. Navigate to the Hisat2 directory and employ the "mv" command to relocate the reference genome file and the transcript annotation file from the "ref_genome" directory to the current directory. Then, extract the splice-site and exon information from the annotation. Proceed by using the "hisat2-build" command to construct an index for the reference genome; eight files with the ".ht2" suffix will be generated. This process of building the index typically takes approximately one hour (see the following illustration for details). An illustration of the Hisat2 index building using the annotation file is shown below.
$ cd Hisat2
$ mv /mnt/data0/zhaody/ref_genome/ref_genome.fa .
$ mv /mnt/data0/zhaody/ref_genome/gencode.v44lift37.annotation.gtf .
$ extract_splice_sites.py gencode.v44lift37.annotation.gtf > genome.ss
$ extract_exons.py gencode.v44lift37.annotation.gtf > genome.exon
$ hisat2-build --ss genome.ss --exon genome.exon ref_genome.fa ref_index_genome

To align the raw RNA-seq reads to the reference genome, first navigate to the Hisat2 directory. Then, utilize the "mv" command to relocate the individual ".fastq" files of the three sample datasets from the "Sratoolkit" directory to the current directory. Afterward, use the "hisat2" command to align the RNA-seq reads from the three sample datasets to the reference genome. Upon completion of the alignment process, you will obtain three mapping files in ".sam" format, with each file corresponding to one of the sample datasets.
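The alignment commands themselves appear to have been lost from this part of the protocol, so the following is a plausible reconstruction rather than the authors' verbatim illustration. It assumes the index name "ref_index_genome" built above and the "*_genome" file-naming pattern used later in this protocol, and it sorts the resulting SAM files into the coordinate-sorted BAM files consumed by the assemblers below, using Samtools as sketched earlier.
$ hisat2 -p 8 --dta -x ref_index_genome -1 SRR7807492_1.fastq -2 SRR7807492_2.fastq -S SRR7807492_genome.sam
$ hisat2 -p 8 --dta -x ref_index_genome -U ERR3639851.fastq -S ERR3639851_genome.sam
$ hisat2 -p 8 --dta -x ref_index_genome -1 SRR10611964_1.fastq -2 SRR10611964_2.fastq -S SRR10611964_genome.sam
$ samtools sort -@ 8 -o SRR7807492_genome.bam SRR7807492_genome.sam
$ samtools sort -@ 8 -o ERR3639851_genome.bam ERR3639851_genome.sam
$ samtools sort -@ 8 -o SRR10611964_genome.bam SRR10611964_genome.sam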
1) The command "STAR --runMode genomeGenerate" is utilized for constructing the reference genome index. The recommended command is as follows.
# STAR --runThreadN 8 --runMode genomeGenerate --genomeDir ./index --genomeFastaFiles ref_genome.fa --sjdbGTFfile reference_Transcripts.gtf --sjdbOverhang readlength-1
where the parameter "--runThreadN" denotes the number of running threads, typically set to be equal to or slightly fewer than the number of available CPU cores. The parameter "--genomeDir" specifies the output directory where the index files are stored. The parameters "--genomeFastaFiles" and "--sjdbGTFfile" indicate the reference genome and the reference transcripts, respectively. It is generally recommended to set the parameter "--sjdbOverhang" to the length of the sequencing reads minus one.
2) The command "STAR" is employed for aligning RNA-seq reads to the reference genome. The recommended command is as follows.
# STAR --runThreadN 8 --genomeDir ./index --readFilesIn reads_1.fastq reads_2.fastq --outSAMtype BAM SortedByCoordinate
where the parameter "--runThreadN" indicates the number of running threads, typically set to be equal to or slightly fewer than the number of available CPU cores. The parameter "--genomeDir" specifies the directory where the index files are stored. The parameter "--readFilesIn" provides the sequencing reads in FASTQ format, namely reads_1.fastq and reads_2.fastq. The parameter "--outSAMtype BAM SortedByCoordinate" is used to generate a sorted BAM file as the final mapping output, eliminating the need for using Samtools to convert the SAM file to BAM. By default, the final mapping file is saved in the current folder.

The first step of using Star is to construct an index for the reference genome. Begin by entering the Star directory and employing the "mv" command to relocate the reference genome file and the transcript annotation file from the "ref_genome" directory to the current directory. Subsequently, utilize the "STAR" command to generate an index for the reference genome (see the following illustration for details). An illustration of the Star index building is shown below.
$ cd Star
$ mv /mnt/data0/zhaody/ref_genome/ref_genome.fa .
$ mv /mnt/data0/zhaody/ref_genome/gencode.v44lift37.annotation.gtf .
$ STAR --runThreadN 8 --runMode genomeGenerate --genomeDir ./index --genomeFastaFiles ref_genome.fa --sjdbGTFfile gencode.v44lift37.annotation.gtf --sjdbOverhang 99

To align the raw RNA-seq reads to the reference genome with Star, navigate to the STAR directory. Then, utilize the "mv" command to relocate the individual ".fastq" files of the three sample datasets from the "Sratoolkit" directory to the current directory. Afterward, use the "STAR" command to align the RNA-seq reads from the three sample datasets to the reference genome. Upon completion of the alignment process, you will obtain three mapping files in BAM format, with each file corresponding to one of the sample datasets.
An illustration of Star mapping is shown below.
$ cd Star
$ mv /mnt/data0/zhaody/Sratoolkit/SRR7807492/SRR7807492_1.fastq .
$ mv /mnt/data0/zhaody/Sratoolkit/SRR7807492/SRR7807492_2.fastq .
$ mv /mnt/data0/zhaody/Sratoolkit/ERR3639851/ERR3639851.fastq .
$ mv /mnt/data0/zhaody/Sratoolkit/SRR10611964/SRR10611964_1.fastq .
$ mv /mnt/data0/zhaody/Sratoolkit/SRR10611964/SRR10611964_2.fastq .
$ STAR --runThreadN 8 --genomeDir ./index --readFilesIn SRR7807492_1.fastq SRR7807492_2.fastq --outSAMtype BAM SortedByCoordinate
$ STAR --runThreadN 8 --genomeDir ./index --readFilesIn ERR3639851.fastq --outSAMtype BAM SortedByCoordinate
$ STAR --runThreadN 8 --genomeDir ./index --readFilesIn SRR10611964_1.fastq SRR10611964_2.fastq --outSAMtype BAM SortedByCoordinate

TransBorrow performs transcript assembly by utilizing the alignment of RNA-seq reads to the reference genome, as well as the assemblies generated by other assemblers. In this protocol, we utilize the read alignments generated by Hisat2 to illustrate TransBorrow's entire transcript assembly process. The general form of the command (with the placeholders reconstructed from the full example given below) is:
# TransBorrow [options] -r <combined_assemblies.gtf> -g <ref_genome.fa> -b <alignment.bam> -s <sequencing_type>
It is worth noting that during the TransBorrow assembly process, specific parameters and commands need to be selected according to the data types and your needs to achieve optimal assembly results. The following are some important parameter descriptions and suggestions for using TransBorrow to assemble transcripts.
According to the data type, you need to choose the appropriate parameter "-s"; e.g. for the data SRR7807492, which is paired-end and nonstranded, set the parameter to "-s unstranded".
The parameter "-c" indicates the minimum coverage of recovered transcripts, which helps to filter out potentially low-confidence transcripts.
The parameter "-l" indicates the minimum length (bp) of recovered transcripts. This parameter helps to filter out short transcripts.
The parameter "-d" refers to the minimum seed coverage in the path-extension procedure of the TransBorrow algorithm. The default value for this parameter is typically 0. A higher minimum seed coverage for extension results in more stringent filtering, potentially excluding low-coverage regions and leading to fewer but more confident assembled transcripts.
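As an illustration of these filtering options, a run on the paired-end, nonstranded sample might look as follows; the values -c 1, -l 200 and -d 2 are arbitrary examples chosen for demonstration, not recommendations from the authors, and they keep only transcripts with coverage of at least 1 and length of at least 200 bp while requiring a minimum seed coverage of 2 during path extension.
# TransBorrow -r combine_SRR7807492.gtf -b SRR7807492_genome.bam -g ref_genome.fa -s unstranded -c 1 -l 200 -d 2 -o ./TransBorrow_results/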
To run StringTie, begin by navigating to the StringTie directory and moving the three mapping files in BAM format to the current directory. Then, execute the "stringtie" command to conduct the transcript assembly process; the assembled transcripts will be generated in GTF format (see the following illustration). Moreover, it is essential to adjust the parameters according to the provided data types. The recommended command using StringTie is as follows.
# stringtie -p 8 -o OutputFile.gtf InputFile.bam
In this command, "-p" specifies the number of threads to be used, which is typically set to be equal to or slightly fewer than the number of available CPU cores. The "-o" parameter specifies the name of the output GTF file, and InputFile.bam indicates the name of the input BAM file. The resulting assembly GTF file will be saved in the current folder. An illustration of running StringTie with its default settings is shown below.
$ cd Stringtie
$ mv /mnt/data0/zhaody/Samtools/SRR7807492_genome.bam .
$ mv /mnt/data0/zhaody/Samtools/ERR3639851_genome.bam .
$ mv /mnt/data0/zhaody/Samtools/SRR10611964_genome.bam .
$ stringtie -p 8 -o SRR7807492_genome_stringtie.gtf SRR7807492_genome.bam
$ stringtie -p 8 -o ERR3639851_genome_stringtie.gtf ERR3639851_genome.bam
$ stringtie -p 8 -o SRR10611964_genome_stringtie.gtf SRR10611964_genome.bam

To assemble the transcripts with Cufflinks, navigate to the Cufflinks directory and move the three mapping files in BAM format to the current directory. Subsequently, execute the "cufflinks" command to carry out the transcript assembly, which will generate three GTF files containing the assembled transcripts. It is imperative to adjust the commands according to the provided data types (see the following illustration). The recommended command using Cufflinks is as follows.
# cufflinks -p 8 -o OutputFile.gtf InputFile.bam --library-type fr-firststrand
In this command, the "-p" parameter specifies the number of threads to be used, typically matching or slightly lower than the available CPU cores. The "-o" parameter is followed by the desired name for the output, and InputFile.bam is the name of the input BAM file. The "--library-type fr-firststrand" parameter indicates that the input RNA-seq data were generated using the first-strand cDNA synthesis method; conversely, the "--library-type fr-secondstrand" parameter would indicate that the input RNA-seq data were generated using the second-strand cDNA synthesis method. The default assumption for input RNA-seq data is nonstranded.
An illustration of running Cufflinks with its default settings is shown below.
$ cd Cufflinks
$ mv /mnt/data0/zhaody/Stringtie/SRR7807492_genome.bam .
$ mv /mnt/data0/zhaody/Stringtie/ERR3639851_genome.bam .
$ mv /mnt/data0/zhaody/Stringtie/SRR10611964_genome.bam .
$ cufflinks -p 8 -o SRR7807492_genome_cufflinks.gtf SRR7807492_genome.bam
$ cufflinks -p 8 -o ERR3639851_genome_cufflinks.gtf ERR3639851_genome.bam
$ cufflinks -p 8 -o SRR10611964_genome_cufflinks.gtf SRR10611964_genome.bam --library-type fr-firststrand

To assemble the transcripts with Scallop, begin by navigating to the Scallop directory and moving the three mapping files in BAM format to the current directory. Next, execute the "scallop" command to perform transcript assembly; the assembled transcripts will be generated in GTF format. It is important to note that Scallop requires different parameters to be configured depending on the specific data type of the input, so adjust the commands accordingly for the provided data types. The recommended command using Scallop is as follows.
# scallop -i InputFile.bam -o OutputFile.gtf --library_type first
In this command, the "-i" parameter is followed by the name of the input BAM file, and the "-o" parameter is followed by the desired name for the output GTF file. The "--library_type first" parameter indicates that the input RNA-seq data were generated using the first-strand cDNA synthesis method; conversely, "--library_type second" would indicate the second-strand cDNA synthesis method. Additionally, the "--library_type unstranded" parameter indicates that the input RNA-seq data are nonstranded.
By default, the final assembly GTF file will be saved in the current directory. An illustration of running Scallop with its default settings is shown below.
$ cd Scallop
$ mv /mnt/data0/zhaody/Cufflinks/SRR7807492_genome.bam .
$ mv /mnt/data0/zhaody/Cufflinks/ERR3639851_genome.bam .
$ mv /mnt/data0/zhaody/Cufflinks/SRR10611964_genome.bam .
$ scallop -i SRR7807492_genome.bam -o SRR7807492_genome_scallop.gtf --library_type unstranded
$ scallop -i ERR3639851_genome.bam -o ERR3639851_genome_scallop.gtf --library_type unstranded
$ scallop -i SRR10611964_genome.bam -o SRR10611964_genome_scallop.gtf --library_type first

Besides the alignment file, TransBorrow needs to take as input the transcripts assembled by different transcript assembly tools, which should first be merged together. Create a new directory named "Gtf_file" in the home directory to store the merged files. Next, move the nine GTF files to the current directory. Finally, utilize the "cat" command to merge the three GTF files corresponding to each sample into one file (see the following illustration).
An illustration of merging the assembled transcripts of the different assemblers is shown as follows.
$ mkdir Gtf_file
$ cd Gtf_file
$ mv /mnt/data0/zhaody/Stringtie/SRR7807492_genome_stringtie.gtf .
$ mv /mnt/data0/zhaody/Cufflinks/SRR7807492_genome_cufflinks.gtf .
$ mv /mnt/data0/zhaody/Scallop/SRR7807492_genome_scallop.gtf .
$ cat SRR7807492_genome_stringtie.gtf SRR7807492_genome_cufflinks.gtf SRR7807492_genome_scallop.gtf > combine_SRR7807492.gtf
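The merge is illustrated above only for SRR7807492. Assuming the StringTie, Cufflinks, and Scallop outputs for the other two samples have likewise been moved into "Gtf_file", the analogous commands would presumably be the following, producing the combine_ERR3639851.gtf and combine_SRR10611964.gtf files used in the next step.
$ cat ERR3639851_genome_stringtie.gtf ERR3639851_genome_cufflinks.gtf ERR3639851_genome_scallop.gtf > combine_ERR3639851.gtf
$ cat SRR10611964_genome_stringtie.gtf SRR10611964_genome_cufflinks.gtf SRR10611964_genome_scallop.gtf > combine_SRR10611964.gtf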
To utilize TransBorrow, navigate to the TransBorrow directory and move the mapping BAM files, the reference genome FASTA file, and the merged transcripts for each sample to the current directory. Employ the "TransBorrow" command to perform the transcript assembly procedure (see the following illustration for details). It is important to consider that different parameters need to be configured for different types of sample data. The recommended command using TransBorrow is as follows.
# TransBorrow -r CombinedInputFile.gtf -b InputFile.bam -g ref_genome.fa -s first -o OutputDirectory
In this command, the "-r" parameter is followed by the combined transcripts assembled by the different tools in GTF format. The "-b" parameter is followed by the name of the input BAM file. The "-g" parameter is followed by the reference genome in FASTA format. The parameter "-s" indicates the type of sequencing data, i.e. whether it is stranded or nonstranded, single-end or paired-end; in this case, "first" indicates that the sequencing data were generated using the first-strand cDNA synthesis method. The "-o" parameter is followed by a directory that stores the temporary files and the final assembled GTF file.
An illustration of running TransBorrow is shown as follows.
$ cd TransBorrow
$ mv /mnt/data0/zhaody/Gtf_file/combine_SRR7807492.gtf .
$ mv /mnt/data0/zhaody/Gtf_file/combine_ERR3639851.gtf .
$ mv /mnt/data0/zhaody/Gtf_file/combine_SRR10611964.gtf .
$ mv /mnt/data0/zhaody/Scallop/SRR7807492_genome.bam .
$ mv /mnt/data0/zhaody/Scallop/ERR3639851_genome.bam .
$ mv /mnt/data0/zhaody/Scallop/SRR10611964_genome.bam .
$ mv /mnt/data0/zhaody/Hisat2/ref_genome.fa .
$ TransBorrow -r combine_SRR7807492.gtf -b SRR7807492_genome.bam -g ref_genome.fa -s unstranded -o ./TransBorrow_results/SRR7807492_TransBorrow.gtf -n 3 -t 8
$ TransBorrow -r combine_ERR3639851.gtf -b ERR3639851_genome.bam -g ref_genome.fa -s single_unstranded -o ./TransBorrow_results/ERR3639851_TransBorrow.gtf -n 3 -t 8
$ TransBorrow -r combine_SRR10611964.gtf -b SRR10611964_genome.bam -g ref_genome.fa -s first -o ./TransBorrow_results/SRR10611964_TransBorrow.gtf -n 3 -t 8
Finally, the results are saved in the directory "./TransBorrow_results/".

To merge the transcripts with TACO, navigate to the TACO directory and move the three assembled GTF files generated by the different assemblers to the current directory. Then, add the absolute paths of all GTF files to a TXT file as the input file for TACO. Subsequently, execute the "taco_run" command to carry out the transcript merging, which will generate a GTF file containing the merged transcripts. The recommended command using TACO is as follows.
# taco_run gtf_files.txt --filter-min-expr 0.001 -o TACO_OutDirectory -p 8
In this command, gtf_files.txt is the input TXT file. The "-p" parameter specifies the number of threads to be used, typically matching or slightly lower than the available CPU cores. The "-o" parameter is followed by the desired name for the output directory. The "--filter-min-expr" parameter is used to filter redundant transcripts so as to retain only those with expression levels above the specified threshold; a setting of 0.001 is recommended here, as it provides more flexibility in capturing transcripts with low expression levels.
An illustration of running TACO with its default settings is shown below.
$ cd TACO
$ mv /mnt/data0/zhaody/Gtf_file/SRR7807492_genome_stringtie.gtf .
$ mv /mnt/data0/zhaody/Gtf_file/SRR7807492_genome_cufflinks.gtf .
$ mv /mnt/data0/zhaody/Gtf_file/SRR7807492_genome_scallop.gtf .
$ ls -R /mnt/data0/zhaody/TACO/SRR7807492_genome.* > SRR7807492_genome_merge.txt
$ ls -R /mnt/data0/zhaody/TACO/ERR3639851_genome.* > ERR3639851_genome_merge.txt
$ ls -R /mnt/data0/zhaody/TACO/SRR10611964_genome.* > SRR10611964_genome_merge.txt
$ taco_run SRR7807492_genome_merge.txt --filter-min-expr 0.001 -o ./SRR7807492_genome_merge.gtf
$ taco_run ERR3639851_genome_merge.txt --filter-min-expr 0.001 -o ./ERR3639851_genome_merge.gtf
$ taco_run SRR10611964_genome_merge.txt --filter-min-expr 0.001 -o ./SRR10611964_genome_merge.gtf

To merge the transcripts with StringTie-merge, navigate to the StringTie directory and move the three assembled GTF files generated by the different assemblers to the current directory. Subsequently, execute the "stringtie --merge" command to carry out the transcript merging, which will generate a GTF file containing the merged transcripts.
The recommended command using StringTie --merge is as follows.
# stringtie --merge stringtie.gtf scallop.gtf cufflinks.gtf -T 0.001 -F 0.001 -o StringTie_merge.gtf
In this command, the "--merge" parameter is followed by the three input GTF files. The "-o" parameter is followed by the desired name for the output GTF file. The parameters "-T" and "-F" indicate the minimum TPM and FPKM values of transcripts to be included in the merged results; a setting of 0.001 is recommended here to allow more transcripts with lower expression to be merged.
An illustration of running StringTie-merge with its default settings is shown below.
$ cd Stringtie
$ mv /mnt/data0/zhaody/Gtf_file/SRR7807492_genome_stringtie.gtf .
$ mv /mnt/data0/zhaody/Gtf_file/SRR7807492_genome_cufflinks.gtf .
$ mv /mnt/data0/zhaody/Gtf_file/SRR7807492_genome_scallop.gtf .
$ stringtie --merge SRR7807492_genome_stringtie.gtf SRR7807492_genome_cufflinks.gtf SRR7807492_genome_scallop.gtf -T 0.001 -F 0.001 -o SRR7807492_StringTie_merge.gtf
$ stringtie --merge ERR3639851_genome_stringtie.gtf ERR3639851_genome_cufflinks.gtf ERR3639851_genome_scallop.gtf -T 0.001 -F 0.001 -o ERR3639851_StringTie_merge.gtf
$ stringtie --merge SRR10611964_genome_stringtie.gtf SRR10611964_genome_cufflinks.gtf SRR10611964_genome_scallop.gtf -T 0.001 -F 0.001 -o SRR10611964_StringTie_merge.gtf

In this protocol, we used three RNA-seq data sets to evaluate the performance of the assembly tools. Two of the latest mapping software packages, Hisat2 and Star, were employed to align the reads to the reference genome. To evaluate the performance of an assembler, we used the following metrics: the number of correctly assembled transcripts, precision, recall, and F-score, where precision is defined as the percentage of correctly assembled transcripts out of the candidates, recall is defined as the fraction of correctly recovered transcripts in the ground truth, and F-score is the harmonic mean of recall and precision [22]: $F = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$.
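Gffcompare, installed earlier, is the natural tool for computing such metrics against a reference annotation, although the protocol does not show the exact command that was used. A minimal sketch, assuming the GENCODE annotation downloaded above and one TransBorrow output file, is shown below; transcript-level sensitivity (recall) and precision are reported in the resulting ".stats" file.
$ gffcompare -r gencode.v44lift37.annotation.gtf -o SRR7807492_eval ./TransBorrow_results/SRR7807492_TransBorrow.gtf
$ cat SRR7807492_eval.stats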
First, we compared TransBorrow to the alternatives using the data sets with different sequencing library layouts (i.e. single- or paired-end reads); the data SRR7807492 (paired-end) and ERR3639851 (single-end) were selected as the test data. After comparison, TransBorrow exhibited superior performance in terms of the four aforementioned metrics when compared to the other assemblers. We then compared the assemblers on data sets with different strand specificities, using SRR7807492 (nonstranded) and SRR10611964 (strand-specific); once again, TransBorrow exhibited superior performance when compared to the other assemblers.

In detail, for the correctly assembled transcripts on the nonstranded RNA-seq data SRR7807492 under Hisat2 and Star mappings, TransBorrow demonstrated significant improvement as mentioned above. For the correctly assembled transcripts on the strand-specific RNA-seq data SRR10611964 under the two aligners, TransBorrow detected 11.53% and 8.80% more expressed transcripts than StringTie, 11.08% and 10.83% more than Scallop, 68.04% and 59.81% more than Cufflinks, 22.55% and 20.06% more than StringTie-merge, and 24.78% and 19.06% more than TACO. For precision and F-score, TransBorrow also reached the highest values. These results provide further evidence that TransBorrow outperformed the other compared assemblers in assembling both strand-specific and nonstranded RNA-seq data.

As mentioned, various RNA-seq alignment tools are available to researchers, and Hisat2 and Star are the two most widely used mappers. In order to evaluate the performance of the assemblers under different alignment tools, we used the two different aligners to generate the mapping results of the RNA-seq data. We then compared TransBorrow to the others using the three data sets SRR7807492, ERR3639851, and SRR10611964, under both Hisat2 and Star mappings. As shown above, TransBorrow consistently demonstrated the best performance in terms of all the aforementioned accuracy metrics regardless of the alignment tool.

Developing accurate and efficient transcript assembly software is crucial for effectively processing large-scale RNA sequencing data. It enables further exploration of the transcriptome and investigations into complex human diseases, such as cancers, which are often associated with aberrant splicing events and expression levels [24].

TransBorrow incorporates several advantages that contribute to its superior performance. First, it effectively borrows the assembly results of other software to guide its own assembly process, combining the strengths of multiple tools. Second, TransBorrow integrates the assemblies of other assemblers to build a novel graph model, namely the colored graph, which facilitates efficient extraction of reliable subpaths. Third, TransBorrow constructs a weighted line graph based on the splice graph and the extracted reliable subpaths. Fourth, TransBorrow introduces a new strategy for identifying transcript-representing paths from the weighted line graphs.

Although TransBorrow brings clear advantages, the current version does have several limitations, such as the inability to assemble long reads, incompatibility with de novo assembly, and a relatively complex installation process. We plan to address these problems in the future.

In conclusion, TransBorrow stands out among existing software by accurately assembling complex raw sequencing data into expressed transcripts. By utilizing TransBorrow, researchers can effectively harness the power of RNA-seq for advanced transcriptome analysis."}
+{"text": "An early gastric cancer was found on the gastric body in an 85-year-old man. Subsequently, an endoscopic submucosal dissection (ESD) was performed (Video 1: successful clip closure for a delayed perforation after gastric endoscopic submucosal dissection). Delayed perforation after gastric ESD is an extremely rare complication and is often managed surgically."}
+{"text": "Electroacupuncture (EA) is a beneficial physiotherapy approach for addressing neuropsychiatric disorders. Nevertheless, the impact of EA on the gut microbiome in relation to anxiety disorders remains poorly understood.

To address this gap, we conducted a study using a chronic restraint stress (CRS) mouse model to investigate the anti-anxiety outcome of EA and its influence on gut microbiota. Our research involved behavioral tests and comprehensive sequencing of full-length 16S rRNA microbiomes.

Our findings revealed that CRS led to significant anxiety-like behaviors and an imbalance in the gut microbiota. Specifically, we identified 13 species that exhibited changes associated with anxiety-like behaviors. Furthermore, EA partially alleviated both the behaviors related to anxiety and the dysbiosis induced by CRS.

In summary, this study sheds light on the alterations in gut microbiota species resulting from CRS treatment and offers new insight into the connection between EA's anti-anxiety effects and the gut microbiota.
Anxiety disorders affect approximately 12.1% of the global population. Electroacupuncture (EA) is a modernized adaptation of traditional Chinese medicine's acupuncture, a technique known for its significant impact on the nervous, endocrine, and immune systems.

The gut-brain axis, representing the bidirectional connection between the gut microbiota and the central nervous system (CNS), is a crucial player in regulating emotions and stress. It operates through various mechanisms, including metabolic, neural, hormonal, and immune-mediated pathways, and a substantial body of evidence links the gut microbiota to emotional and stress-related behavior.

In this context, our current study aimed to investigate how EA treatment impacts the gut microbiome in a murine model of CRS, a well-established and widely accepted stress-related anxiety model that reliably induces anxiety-like behavior. Simultaneously, we assessed anxiety-like behaviors and profiled the gut microbiota by full-length 16S rRNA sequencing.

2.1. Male C57BL/6 mice, aged 8 weeks and weighing between 18 and 22 g, were supplied by the Animal Center of Air Force Military Medical University. These mice were group-housed in cages, with each cage accommodating four mice. They had unrestricted access to food and water and were kept in a controlled environment at a temperature of 20−25 °C. The cages had wire bottoms, and the mice followed a 12-h light/dark cycle, with lights on from 8:00 a.m. to 8:00 p.m. The research procedures conducted in this study received approval from the Ethics Committee of Xi'an Gaoxin Hospital under the reference number 2023-GXKY-0011. All experiments were conducted in accordance with the guidelines provided in the National Institutes of Health Guide for the Care and Use of Laboratory Animals.

2.2. Mice were divided into Control (n = 8) and CRS (n = 16) groups. In the CRS group, mice were exposed to CRS for 2 h per day over a continuous 14-day period, which involved placing the mice in conical tubes (50 mL) equipped with airflow holes. In contrast, the Control group mice were transferred from their original cage to an experimental room, delicately handled for 5 min, and then transported back to their holding place 2 h later. This procedure was repeated over the course of 14 days. For the EA procedures, consistent with previous protocols, mice received EA treatment.

2.4. The open field chamber used in this study was constructed from white polycarbonate and measured 50 cm × 50 cm. Each mouse was positioned in the middle of the chamber, and its behavior was monitored and recorded for a duration of 5 min using an overhead video-tracking and analysis system acquired from Top Scan, Clever Sys Inc. (United States). Specifically, we measured the time that mice spent within the central area, a 25 cm × 25 cm square at the center of the chamber, as well as the total distance they covered during the observation period.

2.5. The maze apparatus used in our study, supplied by Dig Behav, Ji Liang Co. Ltd. (China), comprised two open arms (each measuring 35 cm × 6 cm) and two enclosed arms (each measuring 35 cm × 6 cm), elevated 50 cm above the floor. During the testing phase, mice were initially positioned in the central square of the maze, oriented toward one of the open arms. Their behavior was then observed and recorded for 5 min. We quantified the number of entries into the open arms (entry count) and the duration of time spent within the open arms, employing the same monitoring system as used in the OFT.
All tests occurred in low light conditions, and the test area was thoroughly sanitized with 30% ethanol after each trial.2.6.http://rdp.cme.msu.edu/. Following the OTU assignment and taxonomy analysis, subsequent analyses including Alpha Diversity Analysis, Linear Discriminant Analysis (LDA), Principal Coordinate Analysis (PCoA), and correlation analysis were performed using the Majorbio cloud platform, which is provided by Majorbio Bio-Pharm Technology Co., Ltd.Each mouse was placed in a metabolic cage between 7:00\u2009a.m. and 11:00\u2009a.m., and fecal samples were collected in sterile cryotubes, and immediately frozen in liquid nitrogen before further analysis. Undefecated mice promote defecation by lifting their tails, ensuring that each mouse collects at least one fecal sample. Genomic DNA was then obtained from these fecal samples using the E.Z.N.A. Stool DNA Kit, manufactured by Omega Bio-Tek, United States . For the2.7.p\u2009<\u20090.05 was deemed statistically significant.Statistical analyses were performed with the GraphPad v.8.0 or SPSS 21.0 software . We first assessed the normal distribution of continuous data using the Shapiro\u2013Wilk test. If the data met the criteria for normal distribution or variance homogeneity, we conducted unpaired t-tests or one-way analysis of variance (ANOVA), and subsequent Bonferroni post-hoc tests for pairwise comparisons. Alternatively, if the data did not meet these criteria, nonparametric tests such as the Wilcoxon rank-sum test or Kruskal-Wallis test were applied. To examine correlations between behaviors and gut microbiota at the species level, we utilized Spearman\u2019s rank correlation coefficient. All significance tests were two-tailed, and a significance level of 3.3.1.t\u2009=\u20090.355, df\u2009=\u200922, p\u2009=\u20090.726; t\u2009=\u20090.443, df\u2009=\u200922, p\u2009=\u20090.662; t\u2009=\u20094.535, df\u2009=\u200922, p < 0.001; t\u2009=\u20092.321, df\u2009=\u200922, p\u2009=\u20090.03; t\u2009=\u20092.654, df\u2009=\u200922, p\u2009=\u20090.015; p\u2009=\u20090.04, r2 =\u20090.258, p =\u20090.001; r2 =\u20090.281, p =\u20090.001; r2 =\u20090.124, p =\u20090.001; No significant difference was observed in the whole distance traveled in the OFT analysis, as illustrated in Muribaculaceae, Streptococcaceae and Burkholderiaceae; genus Duncaniella, unclassified_f_Muribaculaceae, Limosilactobacillus, Mammaliicoccus, Lactococcus, Ralstonia, unclassified_f_Eggerthellaceae and Streptococcus; species Ralstonia_pickettii, Streptococcus_danieliae, unclassified_g_Lactococcus, Limosilactobacillus_reuteri, Mammaliicoccus_sciuri, Odoribacter_laneus, unclassified_f_Muribaculaceae, Duncaniella_freteri, Bacteroides_caecimuris and unclassified_f_Eggerthellaceae were enriched in the Control group. 
Whereas phylum Candidatus_Melainabacteria; family Bacteroidaceae, Prevotellaceae, unclassified_o_Bacteroidales, unclassified_o_Vampirovibrionales, Clostridiaceae, Eubacteriaceae, unclassified_o_Rhodospirillales and Desulfovibrionaceae; genus unclassified_f_Prevotellaceae, Eubacterium, Muribaculum, Bacteroides, Vampirovibrio, unclassified_o_Bacteroidales, Oscillibacter, Phocea, Sporobacter, Acetatifactor, Phocaeicola, Mailhella, unclassified_o_Rhodospirillales, Anaerotignum, Harryflintia and Clostridium; species Harryflintia_acetispora, Bacteroides_acidifaciens, Bacteroides_uniformis, Phocaeicola_sartorii, Muribaculum_intestinale, unclassified_f_Prevotellaceae, Alistipes_finegoldii, unclassified_o_Bacteroidales, Vampirovibrio_chlorellavorus, unclassified_g_Clostridium, Eubacterium_coprostanoligenes, Acetatifactor_muris, Anaerotruncus_colihominis, Anaerotruncus_rubiinfantis, Oscillibacter_valericigenes, unclassified_g_Oscillibacter, Phocea_massiliensis, Sporobacter_termitidis, unclassified_o_Rhodospirillales, Mailhella_massiliensis and Helicobacter_bilis were enriched in the CRS group.We found distinctions in taxonomic composition between the Control and CRS groups using both LDA with a threshold of LDA\u2009\u2265\u20093 and a significance level of unclassified_o_Bacteroidales, unclassified_g_Christensenella, Odoribacter_splanchnicus, Acutalibacter_muris, Bacteroides_uniformis, Oscillibacter_valericigenes and Acetatifactor_muris; but had a positive correlation with the abundance of Lactobacillus_gasseri, Limosilactobacillus_reuteri, Akkermansia_muciniphila and Ligilactobacillus_murinus. The No. of entries into open arms had a negative correlation with the abundance of Bacteroides_acidifaciens and Eubacterium_coprostanoligenes. The time spent in center had a negative correlation with the abundance of unclassified_g_Anaerotruncus, Phocaeicola_sartorii, unclassified_f_Prevotellaceae, Mailhella_massiliensis, Vampirovibrio_chlorellavorus, Bacteroides_acidifaciens, Bacteroides_uniformis and Acetatifactor_muris; whereas it had a positive correlation with the abundance of unclassified_f_Muribaculaceae, Duncaniella_freteri, Bacteroides_caecimuris and Lactobacillus_gasseri . Notably, the application of EA (CRS\u2009+\u2009EA) partially mitigated the anxiety-like behavior induced by CRS in mice, as indicated by an increase in the time the animals stayed in the center area of the OFT and the open arms of the EPMT when compared to the CRS\u2009+\u2009fEA group (p\u2009<\u20090.05).As depicted in 3.4.r2\u2009=\u20090.131, p\u2009=\u20090.012; r2\u2009=\u20090.112, p\u2009=\u20090.159; r2\u2009=\u20090.108, p\u2009=\u20090.117; Lactobacillaceae, genus Escherichia and Olsenella, species unclassified_g_Lactobacillus, unclassified_g_Olsenella, Escherichia_fergusonii and Lactobacillus_gasseri. Meanwhile, phylum Proteobacteria; family Kiloniellaceae; genus Butyribacter and Aestuariispira; species Parabacteroides_gordonii, Aestuariispira_insulae, Parabacteroides_goldsteinii, Butyribacter_intestini, Bacteroides_uniformis, Eubacterium_coprostanoligenes and Bacteroides_acidifaciens were enriched in the CRS\u2009+\u2009fEA group. 
Phylum Candidatus_Melainabacteria; family Prevotellaceae and unclassified_o_Vampirovibrionales; genus Faecalimonas, Vampirovibrio and Lachnoclostridium; species Faecalimonas_umbilicata, Vampirovibrio_chlorellavorus and unclassified_g_Lachnoclostridium were enriched in CRS\u2009+\u2009EA group , separation anxiety, selective mutism, specific phobias, social anxiety disorder (SAD), panic disorder, and agoraphobia . The linive mice . Given tEA is an alternative therapeutic approach that combines traditional acupuncture techniques with electrotherapy and is increasingly recognized for its potential in treating neuropsychiatric disorders. A growing body of research has highlighted the beneficial effects of EA, particularly when applied at the \u201cBai hui\u201d (GV20) acupoint. Studies have shown that GV20-based EA can regulate basic fibroblast growth factor (FGF2) in the rat hippocampus , enhanceParasutterella and Bacteroides while decreasing the relative abundances of Dialister, Hungatella, Megasphaera, Barnesiella, Allisonella, Intestinimon and Moryella at the genus level in the treatment of Parkinson\u2019s disease has been well-declared , This in disease . Preclin disease . HoweverOur present study delved into the composition of the gut microbiome in mice subjected to CRS modeling after EA treatment. Similar to other neuromodulation therapies , EA had Lactobacillus_gasseri was more abundant in the Sham group and it was positively correlated with the number of entries into open arms. Conversely, Bacteroides_uniformis was enriched in the CRS\u2009+\u2009fEA group and negatively correlated with the number of entries into open arms and the time the subjects remained in the center. Lactobacillus_gasseri is a probiotic (Bacteroides_uniformis has been associated with the adverse effects of a depressive microbiome on behavior (Eubacterium_coprostanoligenes is a species that has a lipolytic function (Furthermore, our analysis revealed that robiotic whereas behavior . These ffunction . Its abu5.In summary, our findings highlight that CRS leads to pronounced anxiety-like behaviors and disturbances in gut microbiota composition, and these effects can be partially mitigated through EA treatment. We further investigated the specific microbial species associated with anxiety-like behaviors at the species level, identifying 13 species linked to the anxiety-like responses induced by CRS. However, it remains to be explored how longer-duration and varying parameters of EA may impact gut microbiota composition.The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.The animal study was approved by the research procedures conducted in this study received approval from the Ethics Committee of Xi\u2019an Gaoxin Hospital under the reference number 2023-GXKY-0011. The study was conducted in accordance with the local legislation and institutional requirements.JB: Conceptualization, Investigation, Writing \u2013 original draft. J-QW: Conceptualization, Data curation, Writing \u2013 original draft. QT: Investigation, Writing \u2013 original draft. FX: Formal analysis, Funding acquisition, Writing \u2013 original draft. WZ: Methodology, Writing \u2013 original draft. HH: Conceptualization, Writing \u2013 review & editing."} +{"text": "The metabolic neighborhood of a compound can be defined from a metabolic network and correspond to metabolites to which it is connected through biochemical reactions. 
With the proposed approach, we suggest more than 35,000 associations between 1,047 overlooked metabolites and 3,288 diseases (or disease families). All these newly inferred associations are freely available on the FORUM ftp server (see information at https://github.com/eMetaboHUB/Forum-LiteraturePropagation).In human health research, metabolic signatures extracted from metabolomics data have a strong added value for stratifying patients and identifying biomarkers. Nevertheless, one of the main challenges is to interpret and relate these lists of discriminant metabolites to pathological mechanisms. This task requires experts to combine their knowledge with information extracted from databases and the scientific literature. However, we show that most compounds (>99%) in the PubChem database lack annotated literature. This dearth of available information can have a direct impact on the interpretation of metabolic signatures, which is often restricted to a subset of significant metabolites. To suggest potential pathological phenotypes related to overlooked metabolites that lack annotated literature, we extend the \u201cguilt-by-association\u201d principle to literature information by using a Bayesian framework. The underlying assumption is that the literature associated with the metabolic neighbors of a compound can provide valuable insights, or an Key Points:Most metabolites have little or no information available in the literature.We propose an original method leveraging information contained in the literature from metabolic neighbors.We provide more than 35000 suggested relations between overlooked metabolites and disease-related concepts.the Matthew effect = P. Under this assumption, for any contributor i, the prior distribution of ip is modeled as a Beta distribution parameterized by mean (\u03bc = P) and sample size (\u03bd):The probability portions . We assuP, a relationship would not be suggested a priori, and the higher \u03bd, the more each contributor i would have to bring new evidence (in) to change this prior belief , is much more sensitive to outlier contributors than LogOdds [LogOdds should be considered a measure of significance and Log2FC as a measure of effect size. Finally, LogOdds and Log2FC can also be computed independently for each contributor i using their associated component in the prior , only the distribution priorf is used to compute LogOdds and Log2FC. In summary, for metabolites without literature, LogOdds and Log2FC are derived from priorf, while for metabolites with literature, they are obtained from postf. For the latter, priorLogOdds and priorLog2FC are computed from the prior distribution priorf and aim to represent the belief of the metabolic neighborhood, without the influence of the compound\u2019s literature.For metabolites mentioned in few articles and with literature available in the neighborhood (2), the behavior of the method is exactly as described above. When the compound Beta(\u03b1(0), \u03b2(0)), and then the posterior distribution is Beta(\u03b1(0), \u03b2(0)) is used, but predictions are automatically discarded.There may be no literature available in the neighborhood of some metabolites. In this case, the prior distribution is simply defined by Since the construction of the prior from the neighborhood\u2019s literature is critical in the proposed method, several diagnostic values are also reported to judge its consistency. 
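To make the update concrete, here is a minimal sketch of a single Beta component under the mean/sample-size parameterization described above. The actual FORUM prior is a mixture built from the neighborhood's contributors; the numbers below, and the tail-probability reading of LogOdds, are illustrative assumptions rather than the paper's exact formulas.

import numpy as np
from scipy.stats import beta

# Hypothetical inputs: global prior probability P that an article
# mentioning a metabolite also mentions the disease, and prior sample
# size nu (both values are illustrative, not from the paper).
P, nu = 0.01, 50
alpha0, beta0 = P * nu, (1 - P) * nu   # mean/sample-size parameterization

# Literature of the compound itself: n articles, k co-mentioning the disease.
n, k = 12, 4                           # hypothetical counts
posterior = beta(alpha0 + k, beta0 + (n - k))  # conjugate Beta-binomial update

# Illustrative readings of the reported scores: LogOdds as the posterior
# log-odds that p exceeds the global prior P (significance), Log2FC as
# the log2 ratio of the posterior mean to P (effect size).
tail = posterior.sf(P)
log_odds = np.log(tail / (1 - tail))
log2_fc = np.log2(posterior.mean() / P)
print(round(log_odds, 2), round(log2_fc, 2))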
Those additional indicators are detailed in the project repository.Project name: Forum-LiteraturePropagationProject homepage: https://github.com/eMetaboHUB/Forum-LiteraturePropagationOperating system(s): Platform independentProgramming language: Python, bash scriptOther requirements: Python 3.7, Pip, CondaLicense: CeCILL 2.1RRID: SCR_023874"} +{"text": "This novel device appears to be a promising new method for weight reduction that is fast, feasible, and safe. Endoscopy_UCTN_Code_TTT_1AO_2AN"} +{"text": "To identify sets of genes that exhibit similar expression characteristics, co-expression networks were constructed from transcriptome datasets that were obtained from plant samples at various stages of growth and development or treated with diverse biotic, abiotic, and other environmental stresses. In addition, co-expression network analysis can provide deeper insights into gene regulation when combined with transcriptomics. The coordination and integration of all these complex networks to deduce gene regulation are major challenges for plant biologists. Python and R have emerged as major tools for managing complex scientific data over the past decade. In this study, we describe a reproducible protocol, POTFUL (plant co-expression transcription factor regulators), implemented in Python 3, for integrating co-expression and transcription factor target protein networks to infer gene regulation. In organisms such as Drosophila melanogaster, the collective profile of gene expression in each cell type or tissue does not remain static, since genes are continuously regulating each other.
The paths of the generated GMT files were printed:
# GMT_base/POTFUL-Uncut.gmt
# GMT_base/POTFUL-3hpc.gmt
f. Using the following Python script, a bar chart of the numbers of genes in each WGCNA module for each dataset was created:
fig = POT.Plots[Samples[0]]['WGCNA_BarPlot']
fig.show()
fig = POT.Plots[Samples[1]]['WGCNA_BarPlot']
fig.show()
Note: The "fig" is a Plotly figure object that can be further modified accordingly to export a publication-quality image, as described below:
fig.update_layout(font=dict(...))
fig.update_xaxes(...)
fig.update_yaxes(...)
fig.write_image(...)
fig.write_image(...)
g. An enrichment analysis of the modules of one sample with respect to another sample was performed using the following command:
POT.WGCNA_Module_Enrichment
Using Fisher's exact test, the p-value was calculated (hypergeometric test), indicating whether the overlap between the two module gene lists is significant. As the background parameter, the nodes of both co-expression networks that were being compared were used.
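As an aside, the overlap test in step g can be reproduced in a few lines (a minimal sketch with hypothetical module sizes; the WGCNA_Module_Enrichment function additionally reports adjusted p-values):

from scipy.stats import hypergeom

# Hypothetical sizes: N background genes (nodes of both co-expression
# networks), a module of K genes in one sample, a module of n genes in
# the other, and k genes shared between the two modules.
N, K, n, k = 5000, 120, 200, 15

# One-sided p-value: probability of an overlap of at least k genes.
p_value = hypergeom.sf(k - 1, N, K, n)
print(k, p_value)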
For assigning significance color codes and significance asterisks, only 'Adjusted p-value' is considered by default.
Note: The results of the module enrichment analysis can be accessed as a Python (Pandas) dataframe using the following command:
print(POT.Data['Enrichment_Dotplot'])
h. Using the following Python commands, the enrichment dot plot was generated and a high-quality image was exported:
fig = POT.Plots['Enrichment_Dotplot']
fig.update_layout(...)
fig.write_image(...)
fig.write_image(POT.OutDir+f"3hpc__UncutEnri_dot.svg")
Every dot in the enrichment dot plot represents the significance of the enrichment, i.e., green (***), gold (**), and yellow (*). In contrast, the plus (+) symbol represents sets that were not significantly enriched.
Note: Each WGCNA module of the sample on the y-axis (uncut) was compared to the sample on the x-axis (3hpc). The order of samples in the "WGCNA_Module_Enrichment" function was changed to do the comparison in the other direction, i.e., (uncut vs. 3hpc). Additionally, the "WGCNA_Module_Enrichment" function only accepts two samples.
Duration: 5 min
a. The TF–target pairs that did not belong to the known curated TF–target pairs were filtered out using the following Python command for each sample:
# Uncut
POT.TF_reg(Samples[0])
# 3hpc
POT.TF_reg(Samples[1])
Note: We could choose whether to do this step or not. We included this choice to help deal with the large numbers of TF–target pairs created by prediction tools like GENIE3. The purpose of removing some pairs is to make the analysis smoother, especially when there are many pairs to go through.
b. Using the following Python command, the remaining GRN-weighted network was matched with the co-expression network to keep only those pairs that are co-expressed and involved in regulation:
# Uncut
POT.merge_reg_coexp(Samples[0])
# 3hpc
POT.merge_reg_coexp(Samples[1])
Note: The network of node pairs that are co-expressed and are TF–target pairs is called the co-expressed–GRN.
c. Network centrality analysis was performed on the co-expressed–GRN using the following command:
# Uncut
POT.network_centrality(Samples[0])
# 3hpc
POT.network_centrality(Samples[1])
d. The GraphML file was generated, and the network visualized, using the following command:
# Uncut
POT.generate_graphml_out(Samples[0])
# 3hpc
POT.generate_graphml_out(Samples[1])
e. The CERN was plotted and exported using the following commands:
# Uncut
POT.Graph_vis(Samples[0])
POT.Plots[Samples[0]]['Network_Viz'].show(POT.OutDir+'Uncut.html')
# 3hpc
POT.Graph_vis(Samples[1])
POT.Plots[Samples[1]]['Network_Viz'].show(POT.OutDir+'3hpc.html')
f. The co-expressed–GRNs of both samples were compared and plotted to check for any overlapping nodes, using the following command:
POT.netowork_overlap
# There are 20 nodes overlapping between the pair of graphs
POT.Plots['Overlap_Network_Viz'].show('Overlap.html')"} +{"text": "Endoscopic resection of duodenal gastrointestinal stromal tumors (GISTs) is challenging, with non-negligible complications. A 59-year-old man was referred for a 2-cm muscle-origin tumor in the duodenal bulb. Video 1 Endoscopic suturing can rescue the defect of endoscopic full-thickness resection for a duodenal gastrointestinal stromal tumor. Endoscopy_UCTN_Code_TTT_1AO_2AC"} +{"text": "Peroral endoscopic myotomy (POEM) is an
increasingly adopted strategy for the treatment of Zenker\u2019s diverticulum . The Z-Video\u20061\u2002Peroral endoscopic myotomy for Zenker\u2019s diverticulum without tunneling.After the initial mucosal incision, both submucosal sides of the septum are lifted, with a mixture of hydroxyethyl starch and indigo carmine. Then we proceed to direct myotomy of the septum . The diEndoscopy_UCTN_Code_TTT_1AO_2AGCitation Format10.1055/a-2127-7402.Endoscopy 2023; 55: E946\u2013E948. doi:"} +{"text": "The authors demonstrate that scRNA-seq sample pooling followed by genetics-based separation of individuals is an effective means to identify individual samples in a variety of commonly studied species. Single-cell sequencing (sc-seq) provides a species agnostic tool to study cellular processes. However, these technologies are expensive and require sufficient cell quantities and biological replicates to avoid artifactual results. An option to address these problems is pooling cells from multiple individuals into one sc-seq library. In humans, genotype-based computational separation of pooled sc-seq samples is common. This approach would be instrumental for studying non-isogenic model organisms. We set out to determine whether genotype-based demultiplexing could be more broadly applied among species ranging from zebrafish to non-human primates. Using such non-isogenic species, we benchmark genotype-based demultiplexing of pooled sc-seq datasets against various ground truths. We demonstrate that genotype-based demultiplexing of pooled sc-seq samples can be used with confidence in several non-isogenic model organisms and uncover limitations of this method. Importantly, the only genomic resource required for this approach is sc-seq data and a de novo transcriptome. The incorporation of pooling into sc-seq study designs will decrease cost while simultaneously increasing the reproducibility and experimental options in non-isogenic model organisms. Over the last decade, single-cell RNA sequencing (scRNA-seq) has exploded in popularity as a species agnostic tool for studying gene expression at the level of individual cells . The bigIn fields working with low cell numbers, like developmental biology, pooling of samples from multiple animals with no sample labeling method, or intention of demultiplexing has become a standard practice. This approach lacks advantages of true replicates because there is no way to assess the data for representation of all samples or variation between samples. The inability to demux pooled samples thus lacks the ability to account for replicate variation and perform replicate strengthened differential expression analysis . BecauseMethods for analyzing pooled data and for enabling the demultiplexing of pooled scRNA-seq samples are varied in concept and accuracy and have been recently reviewed . To presIn contrast to these experimental demultiplexing approaches, computational methods have been developed to demultiplex pooled human samples without any labeling regimen using the natural genetic differences between individuals. These approaches detect genetic differences between samples at sites of single-nucleotide polymorphisms (SNPs) and implement demultiplexing based on differential distributions of these SNPs between samples. SNP-based approaches have been benchmarked and shown to be highly effective at separating human samples . RelativIn this project, we set out to learn whether SNP-based demultiplexers work in an array of non-human species. 
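In the benchmarks that follow, the key readout is agreement between a demuxer's cluster assignments and ground-truth animal identities. Because cluster IDs are arbitrary, one common way to score this (a minimal sketch, not the paper's actual code) is to optimally match clusters to animals before computing accuracy:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical labels: ground-truth animal of origin and demuxer cluster
# for each cell barcode (doublets and unassigned cells excluded).
truth    = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
clusters = np.array([2, 2, 0, 0, 1, 1, 1, 0, 2, 1])

# Confusion matrix, then optimal cluster-to-animal matching (Hungarian).
n = max(truth.max(), clusters.max()) + 1
conf = np.zeros((n, n), int)
np.add.at(conf, (clusters, truth), 1)
rows, cols = linear_sum_assignment(-conf)   # maximize matched cells
accuracy = conf[rows, cols].sum() / len(truth)
print(f"agreement after matching: {accuracy:.1%}")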
We benchmarked available SNP-based demultiplexing programs and found that most are highly accurate on model organism datasets. We then selected the demultiplexing tool with the broadest potential usability, souporcell , to moreWe first explored the performance of SNP-based demultiplexing methods when applied to a published zebrafish dataset. Two useful resources for applying SNP-based demultiplexing are available for zebrafish, a high-quality genome , and a chttps://github.com/statgen/popscle; After processing each zebrafish scRNA-seq sample individually, we performed in silico pooling of three samples . This syComparison of compute requirements for SNP demuxers.Table S1. To further investigate the demultiplexing accuracy of souporcell, SNP-based cell assignments were assessed for correlations with ground truth animal origin. We found a strong agreement between souporcell assignments and ground truth animal identities . We thenThe presence of doublets in single-cell RNA sequencing is a major confounder. A doublet is a droplet represented by a single-cell barcode that contains more than one cell. In these in silico pools, true doublets can be identified with absolute certainty because we have origin information. When comparing SNP-based demultiplexing results to ground truth, \u201cconfirmed doublets\u201d are cells that were assigned doublets by both the ground truth and demuxer. Furthermore, \u201ccontested doublet\u201d refers to cells in which the experimentally derived ground truth and SNP-based demuxer result disagree about a potential doublet. We thus investigated the doublet detection capacity of souporcell for heterotypic true doublets . Homotypic true doublets created during synthetic pooling were removed, as souporcell relies on intergenotypic doublet detection. We found that souporcell missed almost half of the synthetic heterotypic true doublets in the pooled dataset . The relWe next decided to in silico benchmark these SNP demultiplexers on another potentially inbred population, the African green monkey. The African green monkey is a pre-clinically relevant species, with a published genome and SNP Finally, we also assessed results of SNP-based demultiplexing of synthetically pooled single-nuclei data from axolotl. The axolotl is an example of the type of organism for which scRNA-seq has enabled cell level study of regeneration and immunology for the first time . A genom6 total SNPs and indels . In comparison to the inbred mouse strains, the SNP density of the successfully demultiplexed datasets was higher at 0.34/kb (axolotl), 0.86/kb (African green monkey), and 3.57/kb (zebrafish). Although an exact quantitative analysis of this possible genetic cutoff would be useful, these results imply that a range of 0.2\u20130.34 SNPs/kb may be the minimum required within a sc-seq dataset for SNP-based demultiplexing.Xenopus laevis scRNA-seq data containing eight experimentally pooled samples from three Xenopus transgenic lines that each overexpress a different fluorescent gene Xenopus ly genome .Xenopus transgenic line of origin, the low number of fluorescent gene counts left many cells without sufficient data to make an assignment prediction but no available common SNPs VCF file. We first set out to assess souporcell demux assignments on pooled splenocytes from three transgenic Pleurodeles newts, which express different fluorescent proteins under the same ubiquitous promoter (CAG) . 
We desier (CAG) .Xenopus analysis, we selected only cells that had sufficient read depth and fluorescent gene detection for benchmarking . The heterogeneity of sample representation in different cell clusters highlights the critical need for demultiplexing of pooled scRNA-seq data (As performed in the hmarking . We founhmarking . The fluhmarking and S8H.hmarking . Furtherseq data . WithoutPleurodeles scRNA-seq data (Xenopus data). Similar to the Xenopus dataset, we attribute these fluorescence-based \u201cnegative\u201d assignments to the low capture of fluorescent gene reads . It is more likely that these large pool experiments occur in non-human samples. Thus, to aid future validations, we provide examples and modified memory-efficient scripts to pool samples and determine accuracy which will aid laboratories\u2019 working in any species to conduct their own benchmarking. In addition, this memory-efficient script allows for pooling without a VCF file, which will be critical for all researchers interested in benchmarking SNP demuxers in organisms without VCF files available. Once upper limits become established, another option to increase throughput would be layered multiplexing, for example, labeling 10 individuals with a CMO, another 10 with a second CMO, and so on. This could be paired with a second SNP-based demultiplexing step and could substantially expand sample throughput.It will also be important to define the upper limit for the number of distinct pooled samples followed by SNP-based demuxing for each organism. In line with this, we obtained assignments when performing SNP-based demultiplexing on a pool of 30 zebrafish samples. However, without a benchmarking assessment from a ground truth derived from a distinct technology, it is uncertain if these results can be trusted. We therefore propose that using SNP-based demultiplexers on large pools needs to be further validated. This can be performed in silico as more single-replicate scRNA-seq datasets become published. Until then, the developers of souporcell indicate that 21 pooled human samples can be demultiplexed and speculate that this could work in up to 40 . Multiple independent programs designed specifically for doublet detection using transcriptomic instead of genotypic information are available for scRNA-seq data including DoubletDetection , we modified the default souporcell pipeline to enable remapping of reads studies will greatly expand single-cell\u2013based discoveries. This will facilitate work in well-known and lesser studied species by lowering the financial and technical hurdles of producing adequately powered single-cell experiments. We predict that both species agnostic and cross-species comparative studies are going to be increasingly fruitful in uncovering biological insights and the application of SNP-based demultiplexing with minimal genomic resources is critical for future research.P. waltl and N. viridescens at Karolinska Institutet and were performed according to local and European ethical permits. N. viridescens and P. waltl were raised in-house. All animals were maintained under standard conditions of 12-h light/12-h darkness at 18\u201324\u00b0C , 20 ml of Yokuchi Bitamin Multivitamin and 10 ml of calcium supplementation into 100 Liters of water). For Pleurodeles and Notophthalmus in the Cellplex experiment, animals were housed in the water as described but modified to only have sea salt, Ektozon, and calcium solution.All experiments were carried out in post-metamorphic 18\u201324\u00b0C . Before P. 
waltl and processed as individual samples in parallel. All animals were post-metamorphic newts from established transgenic lines close to sexual maturity: one female tgTol2(CAG:Nucbow CAG:Cytbow)Simon were 85%, 98.4%, and 91.6% in eBFP, eGFP, and mCherry animals, respectively. FACS was used to sort for fluorescent-positive cells (Spleens were harvested from three separate length) , one mal11.1 cm) , and fem10.8 cm) . Forcepsve cells . The sam5 cells of GFP+, 4 \u00d7 105 of mCherry+, and 3.15 \u00d7 105 BFP+ were isolated in individual 1.5 ml Eppendorf tubes. GFP and mCherry expression were high, but BFP expression was dim. 500 \u03bcl of each solution was then added to an individual 1.5 ml Eppendorf tube for the fluorescent pool sample were excN. viridescens (4.45 g and 10.6 cm snout-to-tail) and one male N. viridescens (3.55 g and 10.2 cm) were collected, pooled into one tube, and then processed as described in \u201cfluorescence-pooling experiment\u201d excluding FACS cell sorting. The only modification to the above described processing is that cells were kept at room temperature throughout and that cells were resuspended in 0.7X PBS with 0.04% ultrapure BSA.Spleens from one female P. waltl, spleens were removed from animals as described in \u201cfluorescent-pooling experiment\u201d from one adult tgSceI(CAG:loxP-GFP-loxP-Cherry)Simon female (23.5 g and 16.1 cm snout-to-tail length), one male tgSceI(CAG:loxP-GFP-loxP-Cherry)Simon (13.95 g and 15.7 cm) animal. Pleurodeles were processed as individual samples. After the spleen was thoroughly mashed through the pre-wetted 70-\u03bcm nylon filter and the filter being washed with 10 ml of 0.7X PBS, the cells were centrifuged at 300g for 5 min. Splenocytes were resuspended cells in 1 ml of sterile filtered 1X ACK (http://cshprotocols.cshlp.org/content/2014/11/pdb.rec083295.short) to lyse red blood cells. After one minute of lysis, cells were diluted with 10 ml of 0.7X PBS and filtered through a 70-\u03bcm nylon mesh filter and centrifuged at 300g for 5 min. Cells were then resuspended in 0.7X PBS with 0.04% ultrapure BSA.For Notophthalmus sample and the individual Pleurodeles samples were then taken through the 10x Genomics 3\u2032 Cellplex\u2013labeling protocol the only modifications being the use of 0.7X PBS + 0.04% BSA for all wash and resuspension steps. Samples were stained with CM304 , CMO305 , and CMO306 . Samples were manually counted and pooled at equal ratios immediately before loading onto the 10x Genomics Chromium Controller targeting 9,000 cells in total.The pooled Chromium single-cell 3\u2032 kit v3 (10x Genomics) was used according to the manufacturer\u2019s instructions.P. waltl de novo transcriptome from the study of https://figshare.com/articles/dataset/Trinity_Pwal_v2_fasta_gz/7106033/1 and unzipped. The Trinity were downloaded from SRA using prefetch followed by fasterq-dump with flags split-files and include-technical. 
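The renaming step described next follows the 10x Genomics FASTQ naming convention (<Sample>_S1_L001_<R1|R2|I1>_001.fastq). A hypothetical helper, not from the paper, might look as follows; which SRA read corresponds to R1/R2/I1 varies by submission and must be checked against read lengths before use:

import shutil
from pathlib import Path

# Hypothetical mapping from fasterq-dump's _1/_2/_3 suffixes to 10x
# read types; verify against read lengths for the dataset in question.
READ_MAP = {"1": "R1", "2": "R2", "3": "I1"}

def rename_sra_to_10x(srr: str, sample: str, folder: str = ".") -> None:
    # Rename SRRxxxx_1.fastq etc. to the 10x convention in place.
    for fq in sorted(Path(folder).glob(f"{srr}_[123].fastq")):
        read = READ_MAP[fq.stem.split("_")[-1]]
        shutil.move(str(fq), str(fq.with_name(f"{sample}_S1_L001_{read}_001.fastq")))

rename_sra_to_10x("SRR0000000", "sample1")  # placeholder accession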
Files were renamed to 10x FASTQ format and then aligned using Cell Ranger v7.0.0 count to GRCz11 with the corresponding gtf for GRCz11 filtered via Cell Ranger mkgtf for protein_coding genes.FASTQ files from a previously published study (SRA acchttps://research.nhgri.nih.gov/manuscripts/Burgess/zebrafish/downloads/NHGRI-1/danRer11/danRer11Tracks/NHGRI1.danRer11.variant.vcf.gz and https://research.nhgri.nih.gov/manuscripts/Burgess/zebrafish/downloads/NHGRI-1/danRer11/danRer11Tracks/NHGRI1.danRer11.variant.vcf.gz.tbi (Preprint).To merge bams in silico, a VCF file and tbi index were downloaded from f.gz.tbi and subsf.gz.tbi Preprintexec Demuxafy.sif bcftools filter --include \u201cMAF \u2265 0.05\u201d -Oz --output NHGRI1.maf0.05.danRer11.variant.vcf.gz NHGRI1.danRer11.variant.vcf.gzsingularity exec Demuxafy.sif bcftools sort -Oz NHGRI1.maf0.05.danRer11.variant.vcf.gz -o sorted.NHGRI1.maf0.05.danRer11.variant.vcf.gzsingularity The chromosomes between the vcf and gtf did not match so bcftools annotate --rename-chrs was used to change chromosome names in the VCF using a tab-separated file named chr.conv.txt with the format: chr1 1, chr2 2, etc.exec Demuxafy.sif bcftools annotate --rename-chrs chr.conv.txt sorted.NHGRI1.maf0.05.danRer11.variant.vcf.gz | singularity exec Demuxafy.sif bgzip > rename.sorted.NHGRI1.maf0.05.danRer11.variant.vcf.gzsingularity The sample-specific bam outputs were then merged using Vireo\u2019s synth_pool.py script as follows:python synth_pool.py -s sample1_genome_bam.bam,sample2_genome_bam.bam,sample3_possorted_genome_bam.bam -b sample1/outs/filtered_feature_bc_matrix/barcodes.tsv,sample2/outs/filtered_feature_bc_matrix/barcodes.tsv,sample3/outs/filtered_feature_bc_matrix/barcodes.tsv -d 0.1 -o three_mixed_zf -p 1 -r NHGRI1.maf0.05.danRer11.variant.vcf.gz --randomSEED 50 --nCELL 500.Souporcell was run using souporcell_pipeline.py with inputs: merged BAM output from synth_pool.py, the output barcodes_pool.tsv from synth_pool.py, the genome fasta (Danio_rerio.GRCz11.dna.primary_assembly.fa), N = 3, and vcf file NHGRI1.maf0.05.danRer11.variant.vcf.gz.exec Demuxafy.sif souporcell_pipeline.py -i pooled.bam -b barcodes_pool.tsv -f Danio_rerio.GRCz11.dna.primary_assembly.fa -t 20 -o output -k 3 --common_variants NHGRI1.maf0.05.danRer11.variant.vcf.singularity We found that Freemuxlet failed when using the downloaded VCF file but was successful when inputting the VCF and minimap.bam generated via souporcell when running souporcell without a VCF. This implies that Freemuxlet may be able to be fed a sample-derived VCF . 
An example of a VCF that worked:exec Demuxafy.sif freebayes -f Danio_rerio.GRCz11.dna.primary_assembly.fa -iXu -C 2 -q 20 -n 3 -E 1 -m 30 -g 100,000 souporcell_minimap_tagged_sorted.bam > zf.freebayes.vcfsingularity exec Demuxafy.sif popscle dsc-pileup --sam pooled.bam--group-list barcodes_pool.tsv --vcf zf.freebayes.vcf --out $FREEMUXLET_OUTDIR/pileupsingularity exec Demuxafy.sif popscle freemuxlet --plp $FREEMUXLET_OUTDIR/pileup --out $FREEMUXLET_OUTDIR/freemuxlet --group-list barcodes_pool.tsv --nsample 3singularity exec Demuxafy.sif bash Freemuxlet_summary.sh $FREEMUXLET_OUTDIR/freemuxlet.clust1.samples.gz > $FREEMUXLET_OUTDIR/freemuxlet_summary.tsvsingularity Then Freemuxlet:\u201c{print $1}\u201d chromosomes.txt | paste -s -d, > chr.list.vireo.txtsingularity exec Demuxafy.sif samtools idxstats pooled.bam > chromosomes.txtawk exec Demuxafy.sif cellsnp-lite -s pooled.bam -b barcodes_pool.tsv -O $OUT_DIR -p 20 --chrom \u201c$(<${DemuxSoupDir}chr.list.vireo.txt)\u201d --minMAF 0.1 --minCOUNT 100 --gzipsingularity exec${DemuxSoupDir}Demuxafy.sif vireo -c $OUT_DIR -o $OUT_DIR -N $N.singularity Vireo:exec Demuxafy.sif samtools view -b -S -q 10 -F 3844 pooled.bam > $SCSPLIT_OUTDIR/filtered_bam.bamsingularity exec Demuxafy.sif samtools rmdupsingularity $SCSPLIT_OUTDIR/filtered_bam.bam$SCSPLIT_OUTDIR/filtered_bam_dedup.bamexec Demuxafy.sif samtools sort -osingularity $SCSPLIT_OUTDIR/filtered_bam_dedup_sorted.bam$SCSPLIT_OUTDIR/filtered_bam_dedup.bamexec Demuxafy.sif samtools indexsingularity $SCSPLIT_OUTDIR/filtered_bam_dedup_sorted.bamexec Demuxafy.sif freebayes -fsingularity Danio_rerio.GRCz11.dna.primary_assembly.fa -iXu -C 2 -q 1$SCSPLIT_OUTDIR/filtered_bam_dedup_sorted.bam >$SCSPLIT_OUTDIR/freebayes_var.vcfexec Demuxafy.sif vcftools --gzvcfsingularity $SCSPLIT_OUTDIR/freebayes_var.vcf --minQ 30 --recode --recode-INFO-all --out $SCSPLIT_OUTDIR/freebayes_var_qual30exec Demuxafy.sif scSplit count -csingularity NHGRI1.maf0.05.danRer11.variant.vcf -v$SCSPLIT_OUTDIR/freebayes_var_qual30.recode.vcf -i$SCSPLIT_OUTDIR/filtered_bam_dedup_sorted.bam -b barcodes_pool.tsv -r$SCSPLIT_OUTDIR/ref_filtered.csv -a $SCSPLIT_OUTDIR/alt_filtered.csv -o $SCSPLIT_OUTDIRexec Demuxafy.sif scSplit run -rsingularity $SCSPLIT_OUTDIR/ref_filtered.csv -a $SCSPLIT_OUTDIR/alt_filtered.csv -n 3 -o $SCSPLIT_OUTDIRexec Demuxafy.sif scSplit genotype -rsingularity $SCSPLIT_OUTDIR/ref_filtered.csv -a $SCSPLIT_OUTDIR/alt_filtered.csv -p $SCSPLIT_OUTDIR/scSplit_P_s_c.csv -o $SCSPLIT_OUTDIRexec Demuxafy.sif bash scSplit_summary.shsingularity $SCSPLIT_OUTDIR/scSplit_result.csv.scSplit:SRR12067711\u2013SRR12067712) using prefetch followed by fasterq-dump with flags split-files and include-technical. Files were renamed to 10x FASTQ format and then aligned using Cell Ranger v7.0.0 count to GRCz11 with the corresponding gtf for GRCz11 filtered via Cell Ranger mkgtf for protein_coding genes. The output possorted_genome_bam.bam and filtered barcodes.tsv file were then used to run souporcell:A previously published dataset of 30 pooled zebrafish embryos was downexec Demuxafy.sif souporcell_pipeline.py -i possorted_genome_bam.bam -b barcodes.tsv -f Danio_rerio.GRCz11.dna.primary_assembly.fa -t $THREADS -o $SOUPORCELL_OUTDIR -k 30 --common_variants NHGRI1.maf0.05.danRer11.variant.vcfsingularity https://github.com/RegenImm-Lab/SNPdemuxPaper.The clusters.tsv file output from souporcell was then used to evaluate cluster distribution of cells. 
Code for this is the basis for https://sra-pub-src-2.s3.amazonaws.com/SRR13600554/107606_Xen_Pool_BL7_10_14dpa.bam.1) from a previously published publicly available dataset , three blastemas of three siblings , and three samples from three siblings were download directly. SAMtools (https://ftp.ncbi.nlm.nih.gov/geo/samples/GSM5057nnn/GSM5057661/suppl/GSM5057661_107606_Xen_Pool_BL7_10_14dpa_barcodes.tsv.gz), X. laevis genome FASTA (https://sra-pub-src-2.s3.amazonaws.com/SRR13600553/Xenbase_v9.2.fa.1), and N = 8. Note: this specific reference includes the plasmid sequences necessary for mapping to fluorescent sequences.The Cell Ranger BAM file ( dataset of fluorSAMtools was usedE-MTAB-11662) labeled \u201creseq\u201d and from samples D_1, L_1, and M_1, three individual animals all run on individual wells on a 10x Genomics chip were downloaded. To make a Cell Ranger reference, the axolotl genome (AmexG_v6.0-DD) was downloaded from https://www.axolotl-omics.org/dl/AmexG_v6.0-DD.fa.gz along with a gtf (AmexT_v47-AmexG_v6.0-DD.gtf) https://www.axolotl-omics.org/dl/AmexT_v47-AmexG_v6.0-DD.gtf.gz, which required the removal of white space for use with Cell Ranger (v7.0.0) mkref. Cell Ranger count was run on each library individually, resulting in three position\u2013sorted BAM files from samples D_1, L_1, and M_1. BAM files were merged using synth_pool.py from Vireo which was filtered using BCFtools v1.11.FASTQ files from a previously published axolotl om Vireo using th\u201cMAF \u2265 0.05\u201d -Oz --output ddMale.common_maf0.05.vcf.gz ddMale_to_AmexGv6.vcf and then sorted:bcftools filter --include bcftools sort -Oz ddMale.common_maf0.05.vcf -o sorted.ddMale.common_maf0.05.vcf.gzBarcodes.tsv files were obtained from filtered outputs of Cell Ranger count for each library. Doublet rate (-d) was set to 0.1 and --randomSEED 50.python synth_pool.py -s /D_1/outs/possorted_genome_bam.bam,/L_1/outs/possorted_genome_bam.bam,/M_1/outs/possorted_genome_bam.bam /D_1/filtered_feature_bc_matrix/barcodes.tsv,/L_1/outs/filtered_feature_bc_matrix/barcodes.tsv,/M_1/outs/filtered_feature_bc_matrix/barcodes.tsv -d 0.1 -o pooled_bam -p 1 -r sorted.ddMale.common_maf0.05.vcf.gz --randomSEED 50.Note: we only expected the troublet portion of souporcell to be capable of detecting heterotypic doublets, so for downstream analysis of this synthetically pooled data, we removed all homotypic doublets.The pooled BAM was indexed using SAMtools index-c which made a .csi index and was renamed to have a .bai file extension for use in souporcell. Souporcell was run using souporcell_pipeline.py with inputs: merged BAM output from synth_pool.py, the output barcodes_pool.tsv from synth_pool.py, the genome fasta (AmexG_v6.0-DD.fa), N = 3, VCF file ddMale.common_maf0.05.vcf.gz, and --skip_remap SKIP_REMAP.exec Demuxafy.sif souporcell_pipeline.py -i pooled.bam -b barcodes_pool.tsv -f AmexG_v6.0-DD.fa -t 20 -o output -k 3 --skip_remap SKIP_REMAP --common_variants ddMale.common_maf0.05.vcfsingularity SRR12507774\u2013SRR12507781), AGM3_Mediastinal Lymph Node (SRR12507790\u2013SRR12507797), AGM5_Mediastinal Lymph Node (SRR12507806\u2013SRR12507813), AGM7_Mediastinal Lymph Node (SRR12507822\u2013SRR12507829), and AGM9_Mediastinal Lymph Node (SRR12507846\u2013SRR12507853). C. aethiops has a robust VCF file available (European Variation Archive: PRJEB7923) that needs to be used in conjunction with genome assembly Chlorocebus_sabeus 1.1 (GCA_000409795.2). 
This assembly did not have an annotation file available, and we generated a gtf file for this GenBank assembly using minimap2 to create BAMs with these high quality cells. BAMs were subsequently merged using Vireo:Because of low reads/cell in these libraries, we selected barcodes using Seurat with between 1,000 and 2,000 features. These filtered barcode files were used with 10x subset-bam \u201d -Obarcodes_pool.tsv --chrom $VIREO_OUTDIR -p 20 --minMAF 0.1 --minCOUNT 100 --gzipexec Demuxafy.sif vireo -c $VIREO_OUTDIR -o $VIREO_OUTDIR -N 5.singularity And then cellsnp-lite and Vireo were run:exec Demuxafy.sif samtools view -b -S -q 10 -F 3844 pooled.bam > $SCSPLIT_OUTDIR/filtered_bam.bamsingularity exec Demuxafy.sif samtools rmdupsingularity $SCSPLIT_OUTDIR/filtered_bam.bam$SCSPLIT_OUTDIR/filtered_bam_dedup.bamexec Demuxafy.sif samtools sort -osingularity $SCSPLIT_OUTDIR/filtered_bam_dedup_sorted.bam$SCSPLIT_OUTDIR/filtered_bam_dedup.bamexec Demuxafy.sif samtools index $SCSPLIT_OUTDIR/filtered_bam_dedup_sorted.bamsingularity exec Demuxafy.sif freebayes -fsingularity $SCSPLIT_OUTDIR/filtered_bam_dedup_sorted.bam >GCF_000409795.2_Chlorocebus_sabeus_1.1_cds_from_genomic.fna -iXu -C 2 -q 1 $SCSPLIT_OUTDIR/freebayes_var.vcfexec Demuxafy.sif vcftools --gzvcfsingularity $SCSPLIT_OUTDIR/freebayes_var.vcf --minQ 30 --recode --recode-INFO-all --out $SCSPLIT_OUTDIR/freebayes_var_qual30exec Demuxafy.sif scSplit count -csingularity $SCSPLIT_OUTDIR/freebayes_var.vcf -v$SCSPLIT_OUTDIR/freebayes_var_qual30.recode.vcf -i$SCSPLIT_OUTDIR/filtered_bam_dedup_sorted.bam -b barcodes_pool.tsv -r$SCSPLIT_OUTDIR/ref_filtered.csv -a $SCSPLIT_OUTDIR/alt_filtered.csv -o $SCSPLIT_OUTDIRexec Demuxafy.sif scSplit run -rsingularity $SCSPLIT_OUTDIR/ref_filtered.csv -a $SCSPLIT_OUTDIR/alt_filtered.csv -n $N -o $SCSPLIT_OUTDIRexec Demuxafy.sif scSplit genotype -rsingularity $SCSPLIT_OUTDIR/ref_filtered.csv -a $SCSPLIT_OUTDIR/alt_filtered.csv -p $SCSPLIT_OUTDIR/scSplit_P_s_c.csv -o $SCSPLIT_OUTDIRexec Demuxafy.sif bash scSplit_summary.shsingularity $SCSPLIT_OUTDIR/scSplit_result.csvscSplit:Pleurodeles SuperTranscriptome FASTA and gtf files to produce a Cell Ranger compatible reference. Cell Ranger 7.0.0 count command was then used to map and count reads over the transcriptome for the three transgenic animal scRNA-seq dataset.Cell Ranger 7.0.0 mkref command was used with the above listed Preprint) singularity image (image version 1.0.3). The remapping and variant calling stages of souporcell were run externally because of problems with timeouts on the remapping process with the large salamander transcriptome, and issues with the souporcell internal freebayes command failing. The VCF from freebayes was then used in souporcell pipeline with the --skip_remap SKIP_REMAP and--common_variants ${VCF}. Full scripts used for souporcell processes are included below. 
Summary of souporcell run details can be found in Souporcell -related export\u201c${DemuxSoupDir},${MappingAnalysisDir},${FastaDir}\u201dSINGULARITY_BIND = exec${DemuxSoupDir}Demuxafy.sif renamer.py --bam $BAM --barcodes $BARCODES --out ${OutputName}.fqsingularity exec${DemuxSoupDir}Demuxafy.sif minimap2 -ax splice -I 9 G -t 20 -G50k -k 11 -K 50M -w 15 --sr -A2 -B8 -O12,32 -E2,1 -r200 -p.5 -N20 -f1000,5000 -n2 -m20 -s40 -g2000 -2K50m --secondary = no ${FASTA}${OutputName}.fq > minimap.samsingularity exec${DemuxSoupDir}Demuxafy.sif retag.py --sam minimap.sam --out minimap_tagged.bamsingularity exec${DemuxSoupDir}Demuxafy.sif samtools sort minimap_tagged.bam > minimap_tagged_sorted.bamsingularity exec${DemuxSoupDir}Demuxafy.sif samtools index minimap_tagged_sorted.bamsingularity #freebayes run:exec${DemuxSoupDir}Demuxafy.sif freebayes -f ${FASTA} -iXu -C 2 -q 20 -n 3 -E 1 -m 30 --min-coverage 6 minimap_tagged_sorted.bam > free.vcfsingularity #souporcell run.${CurrentAnalysisDir}free.vcfVCF = exec${DemuxSoupDir}Demuxafy.sif souporcell_pipeline.py -i ${CurrentAnalysisDir}minimap_tagged_sorted.bam -b ${BARCODES} -f ${FASTA} -t 20 -o ${OutputName} -k $N --skip_remap SKIP_REMAP --common_variants ${VCF}singularity Pleurodeles and Notophthalmus dataset was then mapped to this dual species index using Cell Ranger 7.0.0 count command. In addition, the Cell Ranger 7.0.0 multi-command was used to assess multiplexing Cellplex information for all cells in the same dataset. For the Cell Ranger multi-command, the following flags were used: . Souporcell demultiplexing was run identically to above on the Pleurodeles only samples but with N = 4, and the relevant FASTQ and dual species reference transcriptome FASTA files.A dual species Cell Ranger 7.0.0 reference was made using the SuperTranscriptomes and corresponding gtf files (described above) from the two species using Cell Ranger mkref command. The two species, four animal, pooled scRNA-seq from SRR15502048, SRR15502052, and SRR15502056 (https://sra-pub-src-1.s3.amazonaws.com/SRR20079758/WT3_possorted_genome_bam.bam.1wget https://sra-pub-src-1.s3.amazonaws.com/SRR20079759/WT2_possorted_genome_bam.bam.1wget https://sra-pub-src-2.s3.amazonaws.com/SRR20079760/WT1_possorted_genome_bam.bam.1.wget C57BL/6 data were downloaded from 15502056 . For DBA15502056 possortehttps://cf.10xgenomics.com/supp/cell-exp/refdata-gex-mm10-2020-A.tar.gz) and Cell Ranger 7.0.0 count command was then used to map and count reads from each strain and dataset.bamtofastq v1.4.1 was used to generate FASTQ files for subsequent mapping. A Cell Ranger reference was obtained from the 10x Genomics website . To enable more widespread use, we now stabilized memory use throughout the pooling and added an option (--noregionFile) which can pool in the absence of a VCF file. This means that species that do not possess a VCF file can still do the in silico ground truth benchmarking we performed in this study. A pull request has been initiated to propagate the changes to Vireo\u2019s main GitHub repository (https://github.com/single-cell-genetics/vireo/pull/81). The modified version was used to pool the mouse data below. Note: if the pull request is accepted then calling the synth_pool.py script from Vireo\u2019s GitHub repository will in fact be this modified script with new options added. 
This script can produce identical pooled BAM files to the original.To pool bams, we modified the original synth_pool.py script from Vireo\u2019s GitHub repository which was memory intensive and only allowed for pooling in the presence of a VCF file *100. For Vireo, the \u201cfinal donor size\u201d numbers were used and percentage assigned calculated by ((donor0+donor1+donor2)/unassigned)*100.Vireo and souporcell were run as per the previous species and assignment percentages for souporcell were assessed via awk -F Preprint) singularity container (image version 1.0.3). SNP numbers in each VCF were counted using: grep \u201c##\u201d VCFname | wc -l.A summary figure reviews the computational details used to run souporcell on the above datasets . OverallPleurodeles and dual species datasets, the first two steps of the souporcell pipeline were run separately and then the output from these was introduced back into the souporcell pipeline for completion . Seurat was used to import and analyze single-cell gene expression data for all datasets and to analyze the multiplexing capture data for the dual-species Cellplex (CMO)-labeled dataset.A two part analysis in R and then Python was used to evaluate the efficacy of souporcell demultiplexing for each dataset. Scripts in R (version 4.1.2) primarily using Seurat (version 4.1.0) were usePreprint) and upset plots , and included R version 4.1.2 (2021-11-01).Session info including package numbers for R analyses are embedded in the GitHub page by a demultiplexing method. Cell ID value for each bin was then plotted against average total mapped reads. For datasets including fluorescent transgenic lines, binned data are also colored by the average number of summed mapped fluorescent reads per cell.Bar plots: filtered datasets were subset by the animal or animal group assignment from each demultiplex method being used to benchmark souporcell results. Within those subsets, the total cell quantity of cells assigned to each identity by souporcell was plotted (left plots). Alternatively, within each benchmarking demux result subset, the percentage of cells assigned to each identity by souporcell was calculated by dividing by total cells assigned to that identity by the benchmarking demuxer and multiplied by 100.VCF files were generated from 10x BAMs and the species-specific reference using VCFtools v1.11 with:$GENOME -b bamlist --threads 10 | bcftools call -m -Oz -f GQ --threads 10 -o allsites.vcfbcftools mpileup -f SNP density per kilobase was then calculated using VCFtools v 0.1.15:vcftools --SNPdensity 1000 --gzvcf allsites.vcf.gz\u201c{ total + = $4; count++ } END { print total/count }\u201d out.snpdenThis outputs an out.snpden file, and average SNP density across all sites was calculated using: awk E-MTAB-12186 for three animal pooled Pleurodeles splenocyte scRNA-seq and ArrayExpress accession E-MTAB-12182 for four animal pooled Pleurodeles and Notophthalmus splenocyte scRNA-seq. Code used to analyze the data are present in the Materials and Methods section, in linked Colab notebooks, or via GitHub (https://github.com/RegenImm-Lab/SNPdemuxPaper) All other data used in the study were from previously published works of which accessions are noted in the Materials and Methods section.Data are available on ArrayExpress with accession"} +{"text": "Diabetes is a severe challenge to global public health since it is a leading cause of morbidity, mortality, and rising healthcare costs. 3.0 million Ethiopians, or 4.7% of the population, had diabetes in 2021. 
Studies on the chronic complications of diabetes in Ethiopia have not been conducted in lower-level healthcare facilities, so the findings from tertiary hospitals do not accurately reflect the burden of chronic diabetes complications in general hospitals. In addition, there is a lack of information and little research on the complications of chronic diabetes in Ethiopia. The objective of this study was to assess the magnitude of chronic diabetes complications and associated factors among diabetic patients presenting to general hospitals in the Tigray region of northern Ethiopia.As part of a multi-centre cross-sectional study, 1,158 type 2 diabetes (T2D) patients from 10 general hospitals in the Tigray region were randomly chosen. Data were collected using an interviewer-administered questionnaire and a record review, and were analyzed with SPSS version 20. All continuous data were presented as mean ± standard deviation (SD), while categorical data were summarized as frequencies. Factors associated with chronic diabetes complications among T2D patients were identified using a multivariable logistic regression model, and associated factors were declared significant at p < 0.05.Fifty-four percent of the people with diabetes had chronic complications. Hypertension (27%), eye disease, and renal disease (19.1%) were the most common long-term complications of diabetes. Patients with chronic diabetes complications were more likely to be older than 60 years, to be taking both insulin and an OHGA, to have had diabetes for more than five years, to be taking more than four tablets per day, and to have high systolic and diastolic blood pressure. Patients with government employment, antiplatelet drug use, and medication for treating dyslipidemia all had a decreased chance of developing a chronic diabetes complication.At least one chronic diabetic complication was present in more than half of the patients in this study. Chronic diabetes complications were related to patients' characteristics such as age, occupation, diabetes treatment plan, anti-platelet and anti-dyslipidemia medicine, duration of diabetes, high systolic BP, high diastolic BP, and pill burden. To prevent complications, diabetes care professionals and stakeholders must collaborate to establish appropriate methods, especially for individuals who are more likely to experience diabetic complications. Hyperglycemia is a metabolic condition associated with diabetes mellitus. Diabetes impacts people's functional skills and quality of life, leading to severe morbidity and premature mortality, and is one of the top public health concerns worldwide. For instance, studies on the prevalence of chronic diabetic complications revealed that 96% of patients had hypertension, 46% had peripheral neuropathy, 30% had neuropathy, and 7% encountered impotence. Information on the prevalence of diabetes-related complications is essential to change diabetes management policies and practices to effectively control the disease. However, studies focusing on such topics are rare in Ethiopia [20,21]. Although very few studies on chronic diabetes complications have been conducted in different parts of the country, there has not been a recent comprehensive study of outpatients in general hospitals. On the other hand, because those studies were conducted in tertiary hospitals, they were unable to provide a precise picture of diabetes complications at a lower level, such as in general hospitals.
In addition, those studies were carried out among high-risk individuals or study populations including both type 1 and type 2 diabetes, assessed only acute complications or only microvascular or macrovascular complications, and in some studies collected data through document review only. Furthermore, it is essential to investigate diabetes complications and their contributing factors regularly to spot evolving patterns and formulate diabetes management strategies. Therefore, this study aimed to identify chronic complications related to diabetes and associated factors in general hospitals in the Tigray region of northern Ethiopia.A multi-centre cross-sectional study was conducted in the Tigray region from September 2019 to January 2020. Tigray is one of the ten regional states of Ethiopia. The Ethiopian health care system is organized into three tiers: primary, secondary, and tertiary levels of care. The primary level of care is provided at a primary hospital, health centre, and health post. The Primary Health Care Unit (PHCU) is composed of a health centre (HC) and five satellite health posts (HPs). These facilities provide services to approximately 25,000 people. A primary hospital provides inpatient and ambulatory services to an average population of 100,000 and has an inpatient capacity of 25–50 beds. A general hospital provides inpatient and ambulatory services to an average of 1,000,000 people. A tertiary hospital serves an average of five million people and serves as a referral site for general hospitals.In 2019/2020, in the Tigray region, there were 2 referral hospitals, 14 general hospitals, 24 primary hospitals, 230 health centres, and 741 health posts. There were more than 310 ambulances and a well-established referral system. There were over 750 private health facilities, ranging from drug vendors and clinics to general and specialized hospitals. The health workforce numbered more than 25,000, ranging from health extension workers to specialists and sub-specialists.All type 2 diabetic (T2D) patients admitted to the study hospitals and those who attended diabetic clinics during the data collection period participated in the study; to be involved in the study, participants had to be adults. All adult patients aged more than 18 years who were diagnosed with T2D and had follow-up visits in the study hospitals for ≥1 year were included. Patients who were pregnant or critically ill were excluded.The sample size (n) was estimated using the single population proportion formula proposed by Cochran, n = Z²p(1−p)/d², with p = 0.535 and d = 0.03; at a 95% confidence level (Z = 1.96) this gives n ≈ 1,062. 10% of the initial sample size was added for the non-response rate, and the final sample size was 1,168. Proportional allocation of participants was employed to allocate the sample size among the selected general hospitals based on caseload.Ten out of fourteen public general hospitals were selected through a simple random sampling technique; not all general hospitals were included because of budget constraints.
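For reference, the sample-size arithmetic above can be reproduced as follows; the 95% confidence level (Z = 1.96) is an assumption, chosen because it reproduces the reported totals:

# Cochran single population proportion formula, values as reported above.
Z, p, d = 1.96, 0.535, 0.03    # Z = 1.96 is our assumption (95% confidence)
n = Z**2 * p * (1 - p) / d**2  # initial sample size
n_final = round(n * 1.10)      # add 10% for anticipated non-response
print(round(n), n_final)       # -> 1062 1168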
Participants were selected using a systematic random sampling method, whereby the first patient was selected randomly from the first three by a lottery method, and every third patient was selected thereafter until the required sample size was attained. The data were collected using a pre-tested, interviewer-administered questionnaire that was developed based on relevant literature ,21,31,32The T2D patients attending the 10 hospitals during the data collection period were approached by the data collectors and verified for eligibility; then, after informed consent was obtained, data were collected. Data were collected by ten BSc nurses holding either an MPH or an MSc, with multilingual abilities, who were supervised by the first and third authors. The type 2 diabetic patients were identified by examining their diagnosis as reported in the medical record. Moreover, clinical and chronic diabetic complications data were extracted from patients' medical records. The diagnoses of chronic diabetic complications were then confirmed by the physician. To assure data quality, training and orientation on the study were given to the data collectors and supervisors, and the questionnaire was pre-tested and checked for its validity and reliability. The pretest of the questionnaire involved 2% of the sample size, and it was carried out in Quiha general hospital two weeks before the actual data collection. The questionnaire was revised based on the pre-test results. The questionnaires were checked for completeness and consistency on a daily basis. The data were entered, cleaned, and analyzed using SPSS version 20. A binary logistic regression analysis model was used to identify factors associated with chronic diabetes complications. The Hosmer-Lemeshow goodness-of-fit test was used to check the model fitness, and a P-value >0.05 was considered to indicate a good model fit. Independent variables with p < 0.20 in the bivariate analysis were then included in the multivariable logistic regression for further analysis to control for confounding factors. Multicollinearity between independent variables was checked using the tolerance test and the variance inflation factor (VIF). P < 0.05 was considered the cut-off point for reporting an independent variable that shows a statistically significant association with the dependent variable in the multivariable analysis. The strength of the association of factors with chronic diabetes complications was demonstrated by computing the adjusted odds ratio (AOR) and its 95% confidence interval (CI). Socio-demographic: age, sex, marital status, religion, ethnicity, educational status, occupation, residence, monthly income (USD), family history of diabetes and BMI. Clinical: diabetes treatment regimen, anti-platelet drug, anti-dyslipidemia drug, glucometer, duration of diabetes, Fasting Blood Glucose (FBG), Systolic Blood Pressure (SBP), Diastolic Blood Pressure (DBP), pill burden, and high health care cost. Behavioural: adherence to a diabetic diet, saturated fat consumption, vegetable consumption, adherence to diabetic medication, self-blood glucose test, smoking, alcohol consumption, physical activity, and diabetes education.
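Although the study's analysis was run in SPSS version 20, the same modelling steps can be sketched in R for readers who want to reproduce them; the data frame dat and every variable name below are hypothetical placeholders rather than the study's actual coding, and the car and ResourceSelection packages are assumptions of this sketch, not tools named by the authors.
# Hedged sketch of the multivariable logistic regression described above;
# 'dat' and all predictor names are hypothetical placeholders.
model <- glm(complication ~ age_gt60 + occupation + treatment_regimen +
               duration_ge5yr + pill_burden_ge4 + sbp + dbp +
               antiplatelet + antidyslipidemia,
             data = dat, family = binomial)
exp(cbind(AOR = coef(model), confint(model)))   # adjusted odds ratios with 95% CIs
car::vif(model)                                 # multicollinearity check (VIF)
ResourceSelection::hoslem.test(model$y, fitted(model), g = 10)  # Hosmer-Lemeshow fit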
Chronic diabetes complications status: the data about chronic diabetes complications were extracted from the patient's chart/medical record. Only chronic complications that developed after the diagnosis of T2D and could be attributed to diabetes were considered in this study. Hypertension: defined as systolic blood pressure (SBP) ≥ 140 mmHg and/or diastolic blood pressure (DBP) ≥ 90 mmHg; a patient on antihypertensive therapy was also classified as hypertensive ,36. Coronary artery disease (CAD): the diagnostic criteria for CAD were either a patient with typical anginal pain or equivalent symptoms and an abnormal resting ECG, or an asymptomatic patient with an abnormal stress test, either by ECG or echo or a nuclear perfusion imaging test . Peripheral vascular disease: the presence of intermittent claudication and/or an ABI value (< 0.9) in any limb was recorded as peripheral vascular disease . Neuropathy: diagnosed if a change was found in two or more of the three items (hypesthesia or anaesthesia in the lower and upper limbs) when the patient's lower limb was evaluated . Eye diseases: eye diseases such as cataracts, glaucoma and diabetic retinopathy were identified based on the report of the ophthalmologist or optometrist from the dilated eye examination (fundus photography for retinopathy) and a comprehensive eye examination, which was recorded on the patient's chart . Chronic kidney disease: diagnosed based on the presence of urinary albumin and/or an abnormally high level of serum creatinine (low glomerular filtration rate) . Foot problem: the diagnosis of a foot problem was made through foot examination for any abnormalities, and all patients were asked about a history of foot ulcer, neuro-ischemic foot, or amputation . Following a healthy diet: consuming vegetables, beans and peas, fruit, whole grains, nuts and seeds, seafood, low-fat milk and milk products, and a moderate amount of lean meat, poultry, and eggs ,42. High fat/oil consumption: consuming more than 10% of calories from saturated fat, which means more than 20 grams of saturated fat per day ,44. Alcohol consumption: adults with diabetes who drink alcohol should do so in moderation (no more than one drink per day for adult women and no more than two drinks per day for adult men). One drink is defined as 12 oz/355 ml of beer, a 5 oz/148 ml glass of wine, or 1.5 oz/44 ml of distilled spirits . Physical activity: at least 150 minutes per week of aerobic exercise, plus at least two sessions per week of resistance exercise, are recommended . Ethical approval to conduct the study was obtained from the Institutional Review Board (IRB) of Mekelle University (Ref No. ERC 1370/2019). The study received approval from the Tigray regional health bureau and permission from the Medical Directors of the 10 involved hospitals. The study was conducted following the Declaration of Helsinki. Study participants were recruited voluntarily after they were informed about the study and that they could withdraw from it at any stage; thereafter, they signed the consent form. All data were kept anonymously in a safe and secure place to ensure confidentiality, and only the researchers had access to the data. Overall, a total of 1,168 diabetic individuals were eligible for the study, but only 1,158 of the participants' questionnaires were fit for the final analysis, which makes the response rate 99.14 per cent. Thirty-four per cent of the participants were older than 60 years, with a mean age of 55.9 (SD ± 11.9) years. Most of the participants were male (54%), married (67%), Orthodox Christian (88%), of Tigrayan ethnicity (96.3%), and urban residents (72.3%).
The majority of the participants had no formal education (50.5%), 80.3% had no family history of diabetes, 29.7% were unemployed, 7.9% had a monthly income of $34.26–$171.16, and 76.5% had a BMI of <25 kg/m2 . Of the total patients included in this study, 78.9%, 11.7%, and 16.8% were taking an oral hypoglycemic agent (OHGA), an anti-platelet drug, and an anti-dyslipidemia drug, respectively. The mean duration of diabetes was 6.3 (SD ± 4.6) years, and 54.6% of participants had a diabetes duration of < 5 years. Of all participants, 10.1% had a glucometer at home, 60.7% had an FBG of >130.00 mg/dl, 25.8% had an SBP of >149.00 mmHg and 9.0% had a DBP of > 90.00 mmHg. Of the total participants, 42.7% were taking >4 pills per day, 86.4% had a high health care cost, 69.7% consumed a high amount of saturated fat (>20 g per day), and 68.7% were taking less than the recommended amount of vegetables (<4 servings per week). Of all participants, 90.6% and 50.4% adhered to diabetes medication and a healthy diet, respectively. Of the total participants, nine out of ten tested their blood glucose once per month, 6.2% had ever smoked a tobacco product, and 12.4% consumed more than a moderate amount of alcohol (≥3 drinks per day). Moreover, of the total participants, 61.3% were physically active, and 76.5% were attending diabetes education at the time of the follow-up visit . Overall, 54% of participants suffered from at least one chronic complication of diabetes. Among all participants who had complications, 10.5% had a single complication, 16.9% lived with two, and 26.8% with more than two types of complications. Of all types of macrovascular complications, hypertension (27%) was the most common, while cerebrovascular disease (4.31%) was the least common type. Of all participants who had cerebrovascular complications, 0.3% had a TIA and 4% had a history of stroke . The most common type of microvascular complication was ocular disease (22.62%), followed by kidney disease (19.17%) and peripheral neuropathy (11%). Patients with renal complications consisted of 9% with microalbuminuria, 4.0% with macroalbuminuria and 6% with high levels of serum creatinine. Ocular complications included cataracts, retinopathy, diabetes blindness and glaucoma, with magnitudes of 5%, 9%, 1% and 8%, respectively. Of the total participants who had a foot disease, 4% had a diabetes-related foot ulcer, 1% had a foot amputation, 3.6% had ischemic pain, 1% had gangrene and 3% had an infection . The Hosmer-Lemeshow goodness-of-fit test was done, and its result showed P = 0.0063, which was considered a good model fit. The odds ratio was calculated for factors found to be associated with chronic diabetes complications among type 2 diabetic patients. After considering all assumptions of binary logistic regression and the p-value (≤ 0.05) in the bivariate analysis, fifteen variables were identified as candidates for analysis in the multivariable model.
In the multivariable logistic regression analysis, nine variables were found to be factors associated with chronic diabetes complications at a 5% level of significance. The odds of a chronic diabetes complication were higher among patients aged > 60 years than among their counterparts; patients who took insulin and an OHGA had a higher chance of developing a complication than patients taking insulin injection only; patients with a diabetes duration of ≥ 5 years were at higher risk of developing a complication than patients with a shorter diabetes duration; and patients who took ≥4 pills a day had a greater risk of developing a chronic diabetes complication than their counterparts . SimilarThis study aimed to assess the magnitude of chronic diabetes complications and associated factors among diabetic patients attending the general hospitals in the Tigray region, Northern Ethiopia. The overall magnitude of chronic diabetes complications in this study was 54% . This is consistent with studies from Bahir Dar, Northwest Ethiopia (54%), and China (52%) ,21. The However, our study showed that 34%, 45% and 77% of the study participants were >60 years of age, had ≥ 5 years of diabetes duration, and had a normal BMI, respectively. However, the result of this study is higher than that of a study in Gurage Zone, Southwest Ethiopia (46%). The probIn this study, 27.28% of participants had diabetes-related hypertension, which is in line with a study done in Jimma, Ethiopia (25%) . This isIn this study, peripheral vascular disease was seen among 9.13% of participants, which is higher than the finding from a survey conducted in Sri Lanka (4.7%) ,58. The Coronary artery disease (CAD) occurred among 3.28% of participants in this study, which is lower than studies done in Sri Lanka (11%), Saudi Arabia (23%), Bangladesh (26%), India (8%), Nepal (23%), and Iraq (15%) ,60,62–64Of the total participants in this study, 4.31% had a stroke, which is higher than studies done in Libya (1.9%), Saudi Arabia (0.19%), Nepal (1%) and Iraq (0.7%) ,51,54,56Peripheral neuropathy was seen among 11% of participants; this finding is almost similar to that of studies conducted in India (11%) and West Ethiopia (10%) ,67. The This result is higher than the results from Saudi Arabia (1.4%) and Iraq (6.5%) ,56. The However, it is lower than the results from Sri Lanka (63%), Southwest Ethiopia (15%), Nepal (15%), Bangladesh (28%), India (19%), Tanzania (29%), and Egypt (22%) ,62,68,69In this study, the magnitude of diabetes nephropathy was 19.17%; this figure is in line with the study conducted in Ethiopia . Unlike In this study, retinopathy was seen in 9% (95% CI: 7.25–10.53) of participants, which is in line with studies conducted in Indonesia (7%), Tunisia (8.1%), Iraq (7.5%), and Ethiopia (10%) ,64,65,69However, this result is lower than research findings from Ethiopia (26%), Sri Lanka (26%), SSA (15%), Nepal (29%), Korea (38%), India (15.4%), Bangladesh (38%), Libya (31%), Tanzania (50%), Sudan (14%), and Egypt (21%) Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Yes ********** 2.
Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data – e.g. participant privacy or use of data from a third party – those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: No Reviewer #2: No ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1: This article can be made simpler for the scientific community with some modifications and clarifications, as below. Minor comments 1. Consistently use either "Tigray" or "Tigrai" in the title, author affiliation, study area, and elsewhere. 2. Change "north Ethiopia / northern Ethiopia" to "Northern Ethiopia" throughout the manuscript. 3. Instead of saying "an institutional-based study," say "a multi-center cross-sectional study," since the study was conducted in ten public general hospitals. 4. It is better to say 54% rather than 54.0%, and 27% rather than 27.0%. 5. Throughout the manuscript, apply commas to large numbers: instead of saying 60 000–100 000 people, it is better to say 60,000–100,000 people. 6. Remove double full stops throughout the manuscript. 7. Since it is an international journal, it is better to change the income status that you mentioned in Ethiopian birr (ETB) to US dollars using the exchange rate at the date of data collection. 8. I recommend you avoid writing formulas for sample size calculations; it would be better to describe in sentence format the approach you applied for calculating the sample size than to write detailed mathematical equations. 9. Remove the Hosmer-Lemeshow goodness-of-fit test result you mentioned in the method section and put it in the result section (particularly in the multivariable regression table). In the method section you have to mention what you did, not what you found. 10. Add your response rate in the result section, as your study is a cross-sectional study. 11. I appreciate the reporting of the measure of effect (AOR with its 95% CI); make it consistent in the result section of the manuscript (you mentioned ), and re-write it as you have rightly described in the abstract section. 12.
Avoid the P-value if the 95% CI is used: instead of writing like (), remove the P-value and re-write it as , and remove the hyphen (-); rather, use a comma for writing the 95% CI. 13. Check the reference section; the majority of the citations are appropriate, but there are some references cited inappropriately. Check them again using software or manually. 14. Make sure whether your questionnaire has three or four parts; there are inconsistencies in the parts mentioned in the data collection tool and measurement part. Major comments 1. Rationale of the study: You mentioned that the previous studies had a critical methodological limitation, which hindered the scientific community from making any conclusion or judgment? How do you know that the sample size was small? If appropriately calculated, even a sample size of 10 can be enough. There are more than ten similar articles published elsewhere in the country, including in the region; what was the added value of your study? I would suggest you re-write it again, presenting a convincing scientific argument. 2. Did you assess the risk factors? What you assessed was factors associated with chronic diabetic complications; do you think that we can interchangeably use risk factors and factors associated? Can we assess risk factors using a simple, classical cross-sectional study design? I recommend you consistently use the term factors associated, not risk factors, in your manuscript. 3. Sample size calculation: It is appreciable that you have used a 3% margin of error to maximize the sample size, and your response rate was 94.6%, but after adding a 10% non-response rate, the final sample size should be 1,178 not 1,061. 4. Rewrite incomplete sentences, as there are many incomplete sentences; check the spelling and grammar issues (use of present or past tense) throughout the whole manuscript. I am not comfortable with the write-up. Reviewer #2: PLOS ONE PONE-D-22-12568 Research Article: Magnitude of Chronic diabetes complication and its associated factors among adults with type 2 diabetes in Tigray region, northern Ethiopia. By: Kalayou Kidanu Berhe, Mekelle University College of Health Sciences, Mekelle, Tigray, Ethiopia. Dr Hussein Ismail, Reviewer report to PLOS ONE, December 2022. 1. There are a lot of English language errors that need to be corrected; I believe the manuscript needs professional proofreading before publication. Many errors are identified, as in the title ("Chronic"). 2. Moreover, there are a lot of abbreviation errors that need to be corrected, e.g., in the abstract: Bsc nurses, OHA, etc. The manuscript has a lot of abbreviation errors as well, e.g., IDF in the abstract. 3. Title: Magnitude of Chronic diabetes complication and its associated factors among adults with type 2 diabetes in Tigray region, northern Ethiopia. The authors stated that they studied 10 general hospitals out of 13 hospitals, and they did not include any referral or primary care centers. So, the selection is based on general hospitals only; therefore, it should be mentioned in the title. My suggestion for the title: Magnitude of chronic diabetes complication and its associated factors among diabetic patients attending the general hospitals in Tigray region, northern Ethiopia. 4. METHODS 4.1. Why did the author not include all the 13 general hospitals? I was surprised by taking 10 hospitals and leaving 3 hospitals. Please explain. 4.2. Sampling: The methods of sampling were explained efficiently.
Although the author included p = 0.535 (p = proportion of chronic diabetes complications), please include the reference you used that stated the complication proportion as 0.535. Sampling is a step the author did before the research; I am surprised that the proportion used in the sampling (0.535) is the same as the magnitude of diabetes complications, which was the main finding of this study. Please explain. 4.3. Regarding the operational definitions, the definition of hypertension the author used was BP > 140/90; I checked the reference used and it was outdated. Reference number 30: 30. Muxfeldt ES NAdR, Salles GF, Bloch KV. Demographic and clinical characteristics of hypertensive patients in the internal medicine outpatient clinic of a university hospital in Rio de Janeiro. Sao Paulo Med J. 2004;122:87-93. My suggestion: please update all the operational definitions according to the updated guidelines or manuscripts. Regarding hypertension, you may use the American Heart Association guidelines 2017 or the European Society of Cardiology (ESC) guideline 2018. I recommend the ESC because it agrees with the level of 140/90 that you chose. Moreover, according to guidelines: no caffeine, no smoking, no eating for at least 2 h before measurement. The author stated BP was measured 30 minutes after a hot drink such as coffee. Please explain; what is the reference you used? 5. RESULTS 5.1. The author stated in the results: • in which 42.7%, 10.0% and 13.0% had chronic diabetes complication respectively. • had DBP of > 90.00 mmHg, in which 6.0%, 32.2%, 19.7% and 6.6% participants had at least one chronic diabetes complication respectively. Suggestion: • Please specify each complication associated with these numbers. • Please apply this notion along the whole paper. 5.2. Tables: The authors need to put all the abbreviations in the footnote related to each table. Some abbreviations are missing in the footnotes. 6. DISCUSSION: The discussion is well written. 7. CONCLUSION: It highlights the main findings and is supported by the study results. 8. References: The authors included a lot of outdated references, such as: • Reference 10: 1996 • Reference 22: 1965. Suggestion: update the references accordingly. Regarding recent references: only one reference (number 35) was published in 2020. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Zenawi Hagos Gufue, Adigrat University, Ethiopia. Reviewer #2: Yes: Hussein M. Ismail, MD Cardiology. ********** While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 11 Apr 2023 RESPONSE TO REVIEWERS Response to Reviewer #1 I. Minor comments 1. "Tigray" is used consistently instead of "Tigrai" in the title, author affiliation, study area, and elsewhere in the manuscript. 2. "north Ethiopia / northern Ethiopia" was changed to "Northern Ethiopia". 3. "An institutional-based study" was replaced by "a multi-center cross-sectional study". 4. It was corrected to 54% from 54.0% and to 27% from 27.0%, and other similar issues were corrected throughout the document as per your suggestion. 5. Commas were used for large numbers. 6. Double full stops were removed. 7. The Ethiopian birr (ETB) mentioned in the monthly income status section was changed into US dollars based on the average exchange rate of 2019 (Page 9). 8. The formula for sample size calculation was removed and rewritten in sentence format as per your recommendation, and checked using StatCalc for population surveys via Epi Info 7.0 software. 9. The result of the Hosmer-Lemeshow goodness-of-fit test was removed from the method section and placed in the result section (Factors associated with chronic diabetes complications & Table 3 footnote) (Page 11). 10. The response rate was added in the result section of the manuscript (Page 8). 11. The multivariable analysis result was re-written as per your recommendation. 12. In the Abstract & result sections the P-value was removed and re-written as , and the hyphen (-) was removed; instead, a comma was used for writing the 95% CI. 13. Inappropriately cited references were checked & corrected via EndNote software (Pages 18-21). 14. The questionnaire has four parts; this was corrected as "it has four parts" (Page 5). II. Major comments 1. Rationale of the study: revised, and additional scientific arguments were included. 2.
The term "risk factor" was replaced by "factors associated" and used consistently throughout the manuscript. 3. Sample size calculation: The required sample size (n) was estimated manually using a single population proportion formula and checked using StatCalc for population surveys via Epi Info version 7 software, with assumptions of a 95% CI (Z = 1.96), d = 0.03, and P = 0.535. Therefore, the initial sample size was 1,061.882 (ni = (Z1-α/2)² × p(1 – p)/d² = (1.96)² × 0.535 × (1 – 0.535)/(0.03)²). However, a refusal rate of 10% (1,061.882 × 0.1 = 106.1882) was added, giving a final sample size of 1,168.0702, i.e., 1,168. Accordingly, 10 questionnaires were excluded because of gross incompleteness, and 1,158 participants' questionnaires were fit for the final analysis, which makes the response rate 99.14%. 4. Incomplete sentences were re-written, and spelling and grammar issues were checked and corrected. Response to Reviewer #2: Title & abstract 1. Gross English language proofreading was done to correct errors of spelling, grammar, punctuation and sentence construction throughout the manuscript. 2. To avoid confusion with other similar abbreviations, the expanded forms of the abbreviations were included, e.g., IDF; Bsc was corrected as BSc, and other abbreviation errors were corrected accordingly. 3. The research title was modified to "Magnitude of chronic diabetes complication and its associated factors among diabetic patients attending the general hospitals in Tigray region, northern Ethiopia" based on your suggestion (Page 1). 4.
Method and materials 4.1 Study Area: the study was done at 10 general hospitals out of all 14 general hospitals; not all general hospitals were included because four hospitals were randomly excluded due to budget constraints / logistic issues (Page 4). 4.2 Sampling: A reference (ref. no. 21) for P = 0.535 (p = proportion of chronic diabetes complications) was included; this figure, used to calculate the sample size, was taken from a study done in 2015 (before our study was conducted in 2019/20) at Felege Hiwot referral hospital, Bahir Dar, Amhara region, Northwest Ethiopia (Page 5). As you mention, by chance the chronic diabetes complication proportion of the study done at Felege Hiwot referral hospital (P = 0.535), which we used for the sample size calculation, is similar to our finding (P = 0.54). This could occur because of similarity in socio-demographic characteristics and poor glycemic control as a result of poor adherence to diabetes self-management recommendations. However, both studies differ in many things, such as sample size, study facility, study area (Amhara region vs. Tigray region) and study period (2015 vs. 2019/20) (Pages 4-6). 4.3 Operational definitions: 4.3.1 Definition of hypertension: The reference was updated and replaced by the reference that you recommended, the "European Society of Cardiology (ESC) guideline 2018". 4.3.2 Definitions of other chronic diabetes complications: all definitions / diagnostic criteria of other chronic diabetes complications were updated according to the updated guidelines or manuscripts based on your suggestion. 4.3.3 Data collection and measurement (BP): the timing of BP measurement related to coffee consumption was revised to 1-2 hours, and a reference was also included (Page 6). 5.
Result 5.1 Socio-demographic, clinical and behavioral characteristics: In this section we tried to explain the findings based on the cross-tabulation analysis results, as mentioned in your review: • "In which 29.4%, 34.0%, 47.1% and 52.4% participants had chronic diabetes complication respectively". • "In which 42.7%, 10.0% and 13.0% had chronic diabetes complication respectively". • "In which 6.0%, 32.2%, 19.7% and 6.6% participants had at least one chronic diabetes complication respectively". Those are findings of the overall chronic diabetes complications in terms of socio-demographic, clinical and behavioral characteristics in the cross-tabulation analysis results, but we observed that such a way of explanation may result in confusion for the reader. So, to make it clear and simple, we preferred to omit/remove all cross-tabulation findings of chronic diabetes complications from the text explanation of the socio-demographic, clinical and behavioral characteristics section; readers can still get those findings from Tables 1 & 2 (Pages 8-11). 5.2 Tables: all the abbreviations related to each table were placed in the footnotes. 6. Discussion: Except for English language proofreading, revision was not done in this section because you mentioned that "The discussion is well written". 7. Conclusion: Revision was not done because you stated that "It highlights the main findings and is supported by the study results". 8. References: Reference 10 was replaced with an updated reference; Reference 22 was replaced with an updated reference. Accordingly, all references were updated as per your suggestion (Pages 18-21). Attachment: Response to reviewers.docx (submitted filename). 16 May 2023
PONE-D-22-12568R1
Magnitude of chronic diabetes complications and its associated factors among diabetic patients attending the general hospitals in Tigray region, Northern Ethiopia
PLOS ONE Dear Dr. Berhe, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Jun 30 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Manal S. Fawzy, Ph.D., M.D. Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation. Reviewer #2: All comments have been addressed Reviewer #3: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2: Yes Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: Yes Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data – e.g.
participant privacy or use of data from a third party – those must be specified. Reviewer #2: Yes Reviewer #3: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #2: No Reviewer #3: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #2: First, the manuscript looks much better than the first version; thanks to the authors. However, I do not think it is ready for publication. The most important point is to revise the English language again, and the author should submit an official proofreading certificate for the manuscript; if the Editorial Board advises on this regard, it will be very helpful. I still see a lot of English and grammar mistakes, and the writing is not quite professional. 1. The abstract/conclusion: "Conclusion: In this study, the magnitude of chronic diabetes complication was higher because more than half of the study participants had at least one complication." I do not understand this statement; the magnitude is higher than... what? Also, revise the conclusion. 2. The exclusion and inclusion criteria: The author put both criteria together, which is confusing. Please specify: what are the inclusion criteria, and what are the exclusion criteria? 3. Diagnosis of chronic diabetes complications: 3.1. I suggest using this statement instead of yours: Coronary artery disease (CAD): The diagnosis criteria for CAD were either a patient with typical anginal pain or equivalent symptoms and an abnormal resting ECG, or an asymptomatic patient with an abnormal stress test, either by ECG or echo or a nuclear perfusion imaging test. 3.2. Peripheral vascular disease: In the peripheral vascular disease definition, it is advised to limit the ABI to less than 0.9 only, and delete more than 1.3. 3.3. Neuropathy: "loss of sensitivity" is a misnomer; replace it with hypesthesia or anaesthesia in the lower and upper limbs. Spelling: limp ---> limb. 4. Tables: 4.1. Yes column: "yes" is wrongly written. Please correct. 4.2. The abbreviations should be consistent: if you use SBP for systolic blood pressure, you have to use DBP for diastolic blood pressure; please be consistent along the whole manuscript. Reviewer #3: I thank the authors for conducting this interesting study in the local setting. All comments have been addressed. No further comment required. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2: Yes: Hussein M Ismail. Reviewer #3: Yes: Mohammed Abdu Seid. ********** While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 17 Jun 2023 RESPONSE TO REVIEWERS Response to Reviewer #2 1. Abstract 1.1 Conclusion: revision was made based on your comment, and it was revised as "In this study, more than half of the participants had at least one chronic diabetes complication" (Page 2). 2. Method 2.1 The exclusion and inclusion criteria: the inclusion criteria and exclusion criteria are now written separately under the eligibility criteria to avoid confusion (Page 5). 2.2 Diagnosis of chronic diabetes complications: 2.2.1 Coronary artery disease (CAD): corrected as per your suggestion. 2.2.2 Peripheral vascular disease: (ABI more than 1.3) was deleted from the definition; an ABI of less than 0.9 only is used (Page 7). 2.2.3 Neuropathy: the misnomer "loss of sensitivity" in the definition was replaced with "hypesthesia or anesthesia" in the lower and upper limbs, and the word limp was corrected as limb (Page 7). 3. Results 3.1 The wrongly written word "yes" in the column was corrected throughout the tables. 3.2 All abbreviations were written consistently throughout the document (Pages 10-17). 4. Conclusion: corrected as "In this study, more than half of the participants had at least one chronic diabetes complication" (Page 15). 5. Language editing: Our manuscript was copyedited for language usage, spelling, and grammar by Zainabu Karim Mohamed (email: zainab.karim4@gmail.com) and Prof. Lilian T. Mselle (email: nakutz@yahoo.com) from MUHAS, Tanzania. Uploaded as Supporting Information file 1. The authors have adequately addressed the concerns raised by the reviewers. Thank you. Reviewers' comments: 17 Aug 2023 PONE-D-22-12568R2 The magnitude of chronic diabetes complications and its associated factors among diabetic patients attending the general hospitals in Tigray region, Northern Ethiopia. Dear Dr. Berhe: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Professor Manal S. Fawzy Academic Editor PLOS ONE"}
We would expect this effect to be alleviated by sampling training data with less consistency in sample overlap size. Abbreviations: AP: average precision; bp: base pair; CNN: convolutional neural network; kbp: kilobase pair; MPI: mean feature positional information; SNV: single-nucleotide variant. Supplementary material: giad015_GIGA-D-22-00103_Original_Submission; giad015_GIGA-D-22-00103_Revision_1; giad015_Response_to_Reviewer_Comments_Original_Submission; giad015_Reviewer_1_Report_Original_Submission (Fangfang Yan, reviewed 6/6/2022); giad015_Reviewer_2_Report_Original_Submission (Yuwen Liu, reviewed 6/27/2022); giad015_Reviewer_2_Report_Revision_1 (Yuwen Liu, reviewed 12/20/2022); giad015_Reviewer_3_Report_Original_Submission (Borbala Mifsud, reviewed 7/4/2022); giad015_Supplemental_File."} +{"text": "Here, we present a computational protocol to perform a spatiotemporal reconstruction of an epidemic. We describe steps for using epidemiological data to depict how the epidemic changes over time and for employing clustering analysis to group geographical units that exhibit similar temporal epidemic progression. We then detail procedures for analyzing the temporal and spatial dynamics of the epidemic within each cluster. This protocol has been developed to be used on historical data but could also be applied to modern epidemiological data. For complete details on the use and execution of this protocol, please refer to Galli et al. (2023). • Death-based clustering for epidemic reconstruction • Protocol suitable for historical and modern epidemiological data analysis • Spatial dynamics analysis of the epidemic on geographical maps. Publisher's note: Undertaking any experimental protocol requires adherence to local institutional guidelines for laboratory safety and ethics. The protocol below shows how historical data (such as death records) can be used to reconstruct the spatiotemporal evolution of a past epidemic.
As an example, we analyzed the information contained in XVII century death registers of the city of Milan to study the spatial and temporal diffusion of a plague epidemic inside the city. This section includes the minimal software and hardware requirements, the installation procedures, as well as the format of the files to be processed throughout this protocol. Timing: 30 min 1. R is a freely available language and environment for statistical computing and graphics, available at https://cran.r-project.org/ (note that the protocol has been tested on R version 4.1.3 (2022-03-10)). 2. RStudio is a user-friendly integrated development environment for R, available at https://www.rstudio.com/. Optional: QGIS is a free and open-source geographic information system, available at https://www.qgis.org/en/site/. Timing: 10 min 3. To run this protocol, it is required to previously install the R packages listed in the "key resources table":
> install.packages("name_of_the_package")
Timing: 10 min–1 h You can follow the protocol using your data or our sample dataset available on GitHub. 4. You can download our sample dataset from https://github.com/RiccardoND/STAR_protocol_epidemic_reconstruction (GitHub: https://doi.org/10.5281/zenodo.8214153) or directly clone the GitHub repository (~10 Mb) by running the following command on your terminal:
> git clone https://github.com/RiccardoND/STAR_protocol_epidemic_reconstruction
Note: This repository includes all the data and code necessary to reproduce the protocol. In this dataset, each row contains the total number of "cases" (or deaths) for each cause of death, for each day (or other unit of time) and for each spatial location. 5. Format the table to obtain a single case in each row:
> df <- read.csv("TableS1.csv")
> tab <- data.frame(df[rep(seq_len(nrow(df)), df$count), ])
> tab$count <- NULL
> tab$Date <- as.Date(tab$Date)
> tab <- tab[order(tab$Date), ] # sort by date
Note: It is always possible to format your dataset in other ways or add other information, but the protocol must be adjusted accordingly. Note: a version of the dataset already formatted as described in this step is available in the file TableS1_formatted.csv. To follow the protocol step-by-step, you must have the data formatted in a dataset in which each row contains a single case. If you have a dataset with cumulative numbers, see step 5 above on how to format it properly for the protocol. Timing: 20–30 min 1. Prepare the workspace directory. Open RStudio and set the working directory path:
> setwd("yourpath")
2. Load the dataset of cases/deaths and related information and format it as shown in step 5 of the "before you begin" section:
> head(tab)
3. Create a time-series plot that shows the progression of the daily number of deaths for each cause of death (in our case, plague or not plague) (Figure 1):
> library(ggplot2)
> library(tidyverse)
> tab %>%
   group_by(Date, Cause) %>%
   summarize(count = n()) %>%
   ggplot(aes(x = Date, y = count, color = Cause)) +
   geom_line(size = 0.7) +
   geom_point(size = 2) +
   theme_bw() +
   scale_color_manual(values = c("grey50", "red3")) +
   theme(axis.title.x = element_text(size = 15),
     axis.title.y = element_text(size = 15)) +
   labs(y = 'Number of Deaths')
Note: The plot compares the progression of deaths caused by the disease of interest to deaths related to any other cause: it can show the presence of any kind of seasonality, and the completeness of the dataset. The absence of a signal in the period between 4 and 30 August is due to reasons not related to the analysis.
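Before moving on, it can be useful to verify the uncounting performed in step 5 of the "before you begin" section; this is a minimal sketch, assuming the df and tab objects created there and the column names used in this example (Date, Cause, count), which are assumptions about the sample dataset rather than guaranteed names.
> # one row per recorded death: the expanded table must match the sum of counts
> stopifnot(nrow(tab) == sum(df$count))
> # all dates should have parsed correctly
> stopifnot(!any(is.na(tab$Date)))
> # quick look at daily deaths by cause
> head(table(tab$Date, tab$Cause))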
To reconstruct the dynamics of an epidemic, it is helpful to start by plotting the number of deaths (or cases) that occurred at a specific time. In this case, we are going to build a time-series plot with two lines, one for deaths unrelated to the disease of interest, i.e., plague, and one for deaths related to plague. This step allows us to depict the temporal progression of the epidemic in the city and to detect any period with missing data or any other anomaly. Timing: 3–4 h 4. Load the R packages in the current RStudio session:
> library(tidyverse)
> library(reshape2)
> library(factoextra)
> library(ade4)
> library(vegan)
> library(RColorBrewer)
> library(inflection)
> library(ggpubr)
5. Load the table with one row per case, as shown in step 5 of the "before you begin" section. 6. Build the cumulative curves of the plague deaths for each parish. a. Filter for the cause of death or disease of interest:
> tab_1630_peste <- droplevels(tab[tab$Cause == "Plague", ])
> head(tab_1630_peste)
b. Build a matrix in which each column is a parish and each row is a day:
> t_1630_peste <- as.matrix(table(tab_1630_peste$Date, tab_1630_peste$Parish))
c. Add to the matrix the days in which there are no recorded plague deaths:
> days1630 <- seq(from=as.Date("1630-01-01"), to=as.Date("1630-12-31"), by=1)
> absent_days <- as.Date(setdiff(days1630, as.Date(row.names(t_1630_peste))), origin="1970-01-01")
> t_1630_peste_absent <- matrix(ncol=ncol(t_1630_peste), nrow=length(absent_days))
> t_1630_peste_absent[is.na(t_1630_peste_absent)] <- 0
> row.names(t_1630_peste_absent) <- as.character(absent_days)
> t_1630_peste_all <- rbind(t_1630_peste, t_1630_peste_absent)
Note: Select your temporal period of interest. In the example dataset, the cases span the year 1630, thus our range is from the 1st of January 1630 to the 31st of December of the same year. d. Order the table chronologically:
> t_1630_peste_all_ord <- t_1630_peste_all[order(as.Date(row.names(t_1630_peste_all))), ]
e. Create a cumulative matrix:
> peste_cum <- matrix(ncol=ncol(t_1630_peste_all_ord), nrow=0)
> for (i in 2:nrow(t_1630_peste_all_ord)) {
   tmp <- colSums(t_1630_peste_all_ord[1:i, ])
   peste_cum <- rbind(peste_cum, tmp)}
> peste_cum <- rbind(t_1630_peste_all_ord[1, ], peste_cum)
> row.names(peste_cum) <- row.names(t_1630_peste_all_ord)
f. Normalize the number of deaths by dividing the number of daily deaths in each parish by the total number of deaths in that parish:
> peste_cum_norm <- apply(peste_cum, 2, function(x) x/max(x))
g. Plot the cumulative relative frequency curves of the parishes' plague deaths:
> peste_cum_norm_m <- melt(peste_cum_norm)
> colnames(peste_cum_norm_m) <- c("Date", "Parish", "value")
> peste_cum_norm_m$Date <- as.Date(peste_cum_norm_m$Date)
> Cumulative_curves <- ggplot(peste_cum_norm_m, aes(x = Date, y = value, group = Parish)) +
   geom_line() +
   theme_bw() +
   ylab("Cumulative relative frequency of plague deaths (%)")
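The row-by-row loop in step 6e can also be written as a vectorized equivalent; this is a minimal sketch, assuming the same ordered matrix from step 6d, and it produces the same cumulative matrix in a single pass.
> # cumsum() down each parish column replaces the rbind loop of step 6e
> peste_cum <- apply(t_1630_peste_all_ord, 2, cumsum)
> row.names(peste_cum) <- row.names(t_1630_peste_all_ord)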
To select parishes with more than 1 death every two weeks, we have to look for parishes with more than 21 deaths (week of the epidemics / 2), as explained in Galli et\u00a0al., 2023.j.> parr_sel <- peste_cum\u00a0>\u00a0death_count_thr> peste_cum_norm_sel <- peste_cum_norm> peste_cum_norm_melt <- melt(peste_cum_norm_sel)> colnames(peste_cum_norm_melt) <- c> peste_cum_norm_melt$Date <- as.Date(peste_cum_norm_melt$Date)> cumulative_curves_sel <- ggplot)\u00a0+\u00a0geom_line\u00a0+\u00a0scale_color_manual)\u00a0+\u00a0theme_bw\u00a0+\u00a0ylab(\"Cumulative relative frequency of plague deaths (%)\")Remove parishes with fewer cases than the threshold and plot only the selected cumulative curves .> parr_sBuild the cumulative curves of the plague deaths for each parish .7.Note: To cluster the parishes (or geographic units) in two or more groups, we are going to perform a k-means clustering on the result of the Principal Coordinates Analysis (PCoA) performed on the cumulative curves of plague deaths. In particular, we are going to compare the cumulative curves of the different parishes with each other to generate a distance matrix, which will be subjected to PCoA. Performing a PCoA on the dataset\u00a0allows us to reduce the dimension of our data and to visualize them in two (or three) dimensions. Then we can apply k-means clustering, an unsupervised clustering algorithm that groups a dataset into a specific number of clusters . The k-means algorithm will assign each observation (in our case each parish) to a cluster on the basis of their position on the PCoA space.,a.> dist <- dist(t(peste_cum_norm_sel))Calculate the Euclidean distance matrix between the cumulative curves of the different parishes.b.> pcoa <- cmdscale> plot(pcoa$eig)> pcoa <- cmdscaleNote: First, we must determine the best number of axes for the PCoA analysis on the basis of the eigenvalues. The plot shows that the eigenvalues drastically drop down for the first three dimensions.Perform the Principal Coordinates Analysis (PCoA) and make the scree plot .> pcoa fviz_nbclustNote: We determined the optimal number of clusters in the dataset using a popular cluster validation index: the average silhouette width method.,Determine the optimal number of clusters in which the parishes (or geographic units) can be divided .> fviz_nd.> f <- kmeans> clusters <- as.matrix(f$cluster)> clus <- clustersClusteringPerform the clustering analysis on the basis of the cumulative curves.8.Use the PERMANOVA test to determine if the separation between the clusters is statistically significant.> a <- adonis2> p_value <- a$`Pr(>F)`[1]> p_value[1] 0.0019.Plot the results of the PCoA and the k-means clustering .Figure\u00a06> plot_pcoa <- s.class,\u00a0\u00a0\u00a0col\u00a0= c,\u00a0\u00a0\u00a0cellipse\u00a0= 0,\u00a0\u00a0\u00a0sub\u00a0= paste,\u00a0\u00a0\u00a0xlim\u00a0= c)In this step, we are going to group cases/deaths based on the progression of the epidemic. In particular, we are going to analyze the specific cumulative epidemiological curves of each geographical unit. In the example dataset about the 1630 plague epidemic in Milan, the geographical units are the parishes in which the city was divided and where the person died . 
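A reconstructed sketch of the clustering commands in steps 7-8 above, assuming the packages loaded in step 4; three PCoA axes and two clusters follow the example dataset.
> library(factoextra)
> library(vegan)
> dist <- dist(t(peste_cum_norm_sel)) # Euclidean distances between parish curves
> pcoa <- cmdscale(dist, k = 3, eig = TRUE)
> plot(pcoa$eig) # scree plot: eigenvalues drop after the first three dimensions
> fviz_nbclust(pcoa$points, kmeans, method = "silhouette") # average silhouette width
> f <- kmeans(pcoa$points, centers = 2)
> clusters <- as.matrix(f$cluster)
> a <- adonis2(dist ~ as.factor(f$cluster)) # PERMANOVA on the distance matrix
> p_value <- a$`Pr(>F)`[1]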
Other geographical units can be streets, houses, villages, districts, etc.Timing: 3\u20134 h10.Load the necessary R packages in the current RStudio session.> library(RColorBrewer)> library(inflection)> library(ggpubr)11.Color the cumulative relative frequency curves of plague deaths for each parish on the basis of their clusters .Figure\u00a07> peste_cum_norm_melt <- melt(peste_cum_norm_sel)> colnames(peste_cum_norm_melt) <- c> clusters <- as.matrix(f$cluster)> peste_cum_norm_melt$Date <- as.Date(peste_cum_norm_melt$Date)> peste_cum_norm_melt$Cluster <- as.character(clusters)> Cumulative_curves_sel_clusters <- ggplot)\u00a0+\u00a0geom_line\u00a0+\u00a0scale_color_manual)\u00a0+\u00a0theme_bw\u00a0+\u00a0ylab(\"Cumulative relative frequency of plague deaths (%)\")12.Save a table with the information about the clusters and the corresponding parishes to be used later for further analysis on the clusters.> palette <- c> tab_clusters <- data.frame(Parish\u00a0= row.names(clusters), Cluster\u00a0=\u00a0>\u00a0as.data.frame(clusters)$V1, Color\u00a0= palette[as.matrix(clusters)])> head(tab_clusters)> write.csv13.Note: As an example, we are going to determine: the first plague case for each parish, the inflection points of the cumulative curves, and the date at which the parishes of the two clusters reached 25%, 50%, 75%, and 100% of their total plague deaths.a.> peste_cum_norm_melt_first <- peste_cum_norm_melt> peste_cum_norm_melt_first_nodup <- peste_cum_norm_melt_first> peste_cum_norm_melt_first_nodup <- NULL> colnames(peste_cum_norm_melt_first_nodup) <- cCalculate the dates on which the parishes of the two clusters reached the first plague death.b.> peste_cum_norm_melt_25\u00a0<- peste_cum_norm_melt> peste_cum_norm_melt_25_nodup <- peste_cum_norm_melt_25> peste_cum_norm_melt_25_nodup <- NULL> colnames(peste_cum_norm_melt_25_nodup) <- cCalculate the dates on which the parishes of the two clusters reached 25% of total plague deaths.c.> peste_cum_norm_melt_50\u00a0<- peste_cum_norm_melt> peste_cum_norm_melt_50_nodup <- peste_cum_norm_melt_50> peste_cum_norm_melt_50_nodup <- NULL> colnames(peste_cum_norm_melt_50_nodup) <- cCalculate the dates on which the parishes of the two clusters reached 50% of total plague deaths.d.> peste_cum_norm_melt_75\u00a0<- peste_cum_norm_melt> peste_cum_norm_melt_75_nodup <- peste_cum_norm_melt_75> peste_cum_norm_melt_75_nodup <- NULL> colnames(peste_cum_norm_melt_75_nodup) <- cCalculate the dates on which the parishes of the two clusters reached 75% of total plague deaths.e.> peste_cum_norm_melt_100\u00a0<- peste_cum_norm_melt> peste_cum_norm_melt_100_nodup <- peste_cum_norm_melt_100> peste_cum_norm_melt_100_nodup <- NULL> colnames(peste_cum_norm_melt_100_nodup) <- cCalculate the dates on which the parishes of the two clusters reached 100% of total plague deaths.f.> infl_date_tab <- matrix)> colnames(infl_date_tab) <- c> for (i in 1:ncol(peste_cum_norm)){\u00a0col\u00a0= colnames(peste_cum_norm)[i]\u00a0infl_date <- as.Date(bede(as.numeric(as.Date(row.names(peste_cum_norm))), peste_cum_norm,0)$iplast, origin\u00a0= \"1970-01-01\")\u00a0infl_date_tab <- as.character(infl_date)\u00a0infl_date_tab <- col}Calculate the dates on which the cumulative curves of the parishes of the two clusters change concavity, corresponding to the epidemic peak .g.> all_tab_tmp <- merge> colnames[1:2] <- c> all_tab_tmp1\u00a0<- merge> all_tab_tmp2\u00a0<- merge> all_tab_tmp3\u00a0<- merge> all_tab_tmp4\u00a0<- merge> all_tab <- merge> head> all_tab$Inflection_date <- as.Date> 
all_tab2\u00a0<- melt)Merge all the data in one table.h.> all_tab2$Cluster <- factor, ordered\u00a0= TRUE)> my_comparisons <- list)> labels <- list#label name of facet> labels <- list> facet_labeller <- function{return}> Clusters_boxplot <- ggboxplot\u00a0+\u00a0scale_fill_manual)\u00a0+\u00a0geom_jitter)\u00a0+\u00a0facet_wrap\u00a0+\u00a0stat_compare_means\u00a0+\u00a0theme(legend.position\u00a0= \"none\")\u00a0+\u00a0ylab( \"Date\")\u00a0+\u00a0theme(strip.text.x\u00a0= element_text(size\u00a0= 8))Plot the results as boxplots and determine if the differences between the two clusters are statistically significant > gps <- read.csv> clusters <- read.csv(\"Clusters.csv\")Load the table with the epidemiological data, the table with the GPS information about the geographic units, in this case, the parishes, and the table with the clusters (see step 12).b.> df2\u00a0<- data.frame)> df3\u00a0<- df2 %>% select(-count)> tab <- left_join> tab$Date <- as.Date(tab$Date)> tab_cluster <- left_join> tab_cluster_2\u00a0<- droplevels))#drop rows without cluster total number of cases in the 2 clusters\u00a0= 7002Produce a summary that integrates all the information for your dataset.c.> tab_cluster_2$Weeks <- as.numeric)> tab_cluster_2$Cluster <- factor)> tab_cluster_3\u00a0<- tab_cluster_2 %>% filter(Death_cause\u00a0== \"Plague\")> cc_clusters <- tab_cluster_3 %>%\u00a0group_by %>%\u00a0summarize(count\u00a0= n) %>%\u00a0ggplot)\u00a0+\u00a0geom_line(size\u00a0= 0.7)\u00a0+\u00a0geom_point(size\u00a0= 2)\u00a0+\u00a0theme_bw\u00a0+\u00a0scale_color_manual)\u00a0+\u00a0theme(axis.title.x\u00a0= element_text(size\u00a0= 12),\u00a0\u00a0axis.title.y\u00a0= element_text(size\u00a0= 12))\u00a0+\u00a0labs(y\u00a0= 'Number of Deaths')Note: The number of color values assigned to the function \u201cscale_fill_manual\u201d must be the same as the number of clusters.Plot the weekly number of plague deaths for each cluster .> tab_clVisualize the epidemiological evolution of the epidemic in the parishes of the two clusters.The parishes, and therefore our cases, have been clustered on the basis of the temporal progression of the epidemic. 
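A reconstructed sketch of the inflection-date computation of step 13f above: bede() from the 'inflection' package estimates where each parish's cumulative curve changes concavity, i.e., the epidemic peak.
> library(inflection)
> infl_date_tab <- matrix(nrow = 0, ncol = 2)
> colnames(infl_date_tab) <- c("Parish", "Inflection_date")
> for (i in 1:ncol(peste_cum_norm)) {
   col <- colnames(peste_cum_norm)[i]
   x <- as.numeric(as.Date(row.names(peste_cum_norm)))
   infl_date <- as.Date(bede(x, peste_cum_norm[, i], 0)$iplast, origin = "1970-01-01")
   infl_date_tab <- rbind(infl_date_tab, c(col, as.character(infl_date)))
  }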
Now we can analyze what are the differences and similarities between the clusters.Timing: 4 h15.Load the R packages in the current RStudio session.> library(png)> library(grid)16.Produce a summary table that integrates all the information for your dataset (see step 14b).> head(tab_cluster)# A tibble: 6 x 7# Groups: Parish, Cluster, Color, Death_cause, Latitude [6]Note: Latitude and Longitude are the columns used to indicate geographic coordinates.17.Filter the dataset to remove all the parishes without geographic information.> df <- tab_cluster %>% filter(!is.na(Latitude))18.Create a new gray cluster (Cluster \u201c0\u201d) for the unassigned parishes .> df$Cluster[is.na(df$Cluster)] <- 0> df$Color[is.na(df$Color)] <- \"gray\"19.Summarize the number of cases for each parish, cluster, death cause, and coordinates.> df2\u00a0<- df %>% group_by %>%summarize(Count\u00a0= n)> head(df2)# A tibble: 6\u00a0\u00d7\u00a07# Groups: Parish, Cluster, Color, Death_cause, Latitude [6]20.Clean the table removing non-plague-related deaths.> df2$Cluster <- factor,ordered\u00a0= TRUE)> peste_clus_gps <- df2 %>% filter(Death_cause\u00a0== \"Plague\")> peste_clus_gps$Death_cause <- NULL> peste_clus_gps <- peste_clus_gps> peste_clus_gps$Latitude <- as.numeric(peste_clus_gps$Latitude)> peste_clus_gps$Longitude <- as.numeric(peste_clus_gps$Longitude)> peste_clus_gps$Count <- as.numeric(peste_clus_gps$Count)21.png image and transform it into a raster image graphical object.Load your map stored as a > map <- readPNG(\"positron_darker_2023.png\")> map_2_plot <- rasterGrob22.Annotate the GPS coordinates of the four corners of the map image.Note: we can retrieve our base map image from QGIS by cropping the area of interest and annotating the GPS coordinates of the margins of our crop. This is essential to plot the points in their exact location on the map: the map itself needs to be referenced to the real GPS coordinates so that the point can be plotted using the real GPS coordinates available for each parish.> gps_map <-data.frame,\u00a0\u00a0\u00a0Y\u00a0= c,\u00a0\u00a0\u00a0fid\u00a0= c,\u00a0\u00a0\u00a0crop\u00a0= c)> xmin <- gps_map #Bottom Left margin> ymin <- gps_map #Bottom Right margin> xmax <- gps_map #Top Right margin> ymax <- gps_map #Top Right margin23.Find the aspect ratio of the png image and use the \u201czoom\u201d variable to scale the output file size.> img_width <- ncol(map)> img_height <- nrow(map)> aspect_ratio <- img_width/img_height> zoom <- 15Note: With the aspect ratio of the map and a multiplicative factor (zoom variable), it is possible to save the final map image at different sizes maintaining good resolution. The reasonable value of the zoom variable depends greatly on the size (in pixels) and the shape of the initial crop of the map.24.Plotting the parishes over the map according to the GPS coordinate. The size of the points is associated to the number of plague deaths and the color represents the cluster of the parishes.Note: The aspect ratio of the map must be maintained. To do so, we have to fix the width and height of the final plot file. RStudio may automatically plot the map with an incorrect aspect ratio. 
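A hedged sketch of the map overlay of steps 21-24, assuming xmin/xmax/ymin/ymax hold the corner coordinates annotated in step 22; the default color palette and the ggsave dimensions are illustrative choices, not the authors' verbatim settings.
> library(png)
> library(grid)
> library(ggplot2)
> map <- readPNG("positron_darker_2023.png")
> map_2_plot <- rasterGrob(map, width = unit(1, "npc"), height = unit(1, "npc"))
> p <- ggplot(peste_clus_gps, aes(x = Longitude, y = Latitude)) +
   annotation_custom(map_2_plot, xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax) +
   geom_point(aes(size = Count, color = Cluster)) +
   xlim(xmin, xmax) + ylim(ymin, ymax) +
   theme_classic()
> ggsave("map_clusters.png", p, width = zoom, height = zoom / aspect_ratio) # preserve the map's aspect ratio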
We strongly recommend saving the png file using the provided commands instead.> p <-ggplot)+\u00a0annotation_custom+\u00a0geom_point)+\u00a0scale_color_manual)+\u00a0labs\u00a0+\u00a0xlim+\u00a0ylim+\u00a0theme_classic+\u00a0theme,\u00a0\u00a0legend.title=element_text(size=zoom/2),\u00a0\u00a0axis.ticks=element_blank,\u00a0\u00a0axis.title.x=element_blank,\u00a0\u00a0axis.title.y=element_blank,\u00a0\u00a0panel.grid.major\u00a0= element_blank,\u00a0\u00a0panel.grid.minor\u00a0= element_blank,\u00a0\u00a0panel.background\u00a0= element_blank,\u00a0\u00a0legend.position\u00a0= c )25.png and scale it using the zoom variable Generate a visualization of the distribution of the parishes of the different clusters on a map using the coordinates and the information about the parishes.Parishes localization on the map of the city of Milan. In blue, parishes of cluster 1; in red, those of cluster 2; in gray, parishes with less than 21 total plague deaths . The size of the points represents the total number of deaths related to the plague experienced by the parish.This protocol has been designed to reconstruct the epidemiological dynamics of an epidemic using the recorded deaths and their geographical location.The first outcome consists of a time-series plot where it is possible to compare the temporal evolution of the epidemics against the incidence of deaths unrelated to the disease of interest .Then, the protocol performs a clustering analysis on the geographical units on the basis of their cumulative relative frequency of plague deaths. The final outcome of this step is the Principal Coordinates Analysis (PCoA) plot, where the parishes have been colored on the basis of the clusters .Once we find the clusters, we can start to analyze the temporal dynamics of the epidemic in each of them; in Lastly, we analyze the spatial dynamics clusters by visualizing the geographical position of the parishes on a map: The clustering analysis used in this protocol does not rely on any kind of geographic information. Although this approach is advantageous when we are dealing with historical data for which geographic information is rarer or imprecise, this approach may be limiting for the analysis of datasets in which this information is available and reliable. In this case, the clustering analysis should consider the implementation of geographic information.In step 5 of \u201cOpen the csv file with a spreadsheet application .In this application, we can easily apply any adjustment and export the spreadsheet as an xlsx or a csv file. The file should be correctly formatted and ready to be imported in R.> df <- read.csvFor xlsx files (Excel format file):> install.packages(\"readxl\")> library(readxl)> df <- read_xlsx(\"TableS1.xlsx\")For csv files, we can import the table in R using the same lines of code in step 5 of \u201cAt step 5 in the \u201c> install.packages(\"vroom\u201d)> library(vroom)> library(tidyverse)> df <- vroom> tab <- as_tibble)> tab$count <- NULL> tab$Date <- as.Date(tab$Date)> tab <- tab #sort by dateWhen a table is too large , the functions \u201cread.csv\u201d (for \u201ccsv files\u201d) or \u201cread.delim\u201d (for tab delimited files) may take too much computational time and RAM to maintain the table in the environment. In this case, the \u201ctibble\u201d class can help reduce time and RAM needed to handle the dataset. Thus, to upload a large table it is better to use another R library such as \u201cvroom\u201d. 
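A reconstructed sketch of the vroom-based loading referred to here, to be substituted for step 5 when the table is very large; the same one-row-per-case expansion is then applied to the tibble.
> library(vroom)
> library(tidyverse)
> df <- vroom("TableS1.csv")
> tab <- as_tibble(df[rep(seq_len(nrow(df)), df$count), ])
> tab$count <- NULL
> tab$Date <- as.Date(tab$Date)
> tab <- tab[order(tab$Date), ] # sort by date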
The following code must be substituted to step 5 of the \u201cAt step 6, parishes name contains spelling errors.> as.matrix(names(table(tab$Parish)))It is possible that the dataset contains spelling errors, or the same parish written in different ways . In these cases, R does not consider them as the same parish. To find possible errors we can list all the parishes and manually check for errors.> tab$Parish <- gsubThen we can correct them using R. As an example, consider a situation in which the parish of \u201cS. Bartolomeo\u201d is written in two different ways: \u201cS. Bartolomeo\u201d and \u201cS. bartolomeo\u201d.In this way we substituted all the cells containing \"S. bartolomeo\" with \"S. Bartolomeo\".At step 7c, the silhouette analysis indicates an optimal number of clusters higher than two.The protocol has been developed on the example dataset in which silhouette analysis finds that two clusters is the optimal number to classify the parishes. For this reason, this protocol can be directly applied only when the two clusters are found to be optimal.Whenever the protocol finds that more than two clusters are needed to describe the dataset, the user should slightly modify the commands to be able to compare the clusters .In the section \u201cGPS information is not always available, particularly when dealing with old datasets that may refer to particular geographic locations that changed names or disappeared over time.The clustering approach applied in this protocol does not rely on any type of geographical information. Thus, you can follow the protocol from step 1 till step 14 without any specific geographic information .Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Riccardo Nodari (This study did not generate new unique reagents."} +{"text": "The software and firmware are designed for Opal Kelly FPGA modules, yet the Python developments are generally useful to organize communication with peripheral chips.We are developing a data acquisition system (DAQ) for real-time feedback that uses FPGA-based control of and acquisition from various electronic chips, or peripherals. Because these peripherals communicate over multiple protocols through an FPGA, we designed pyripherals retains as a dictionary key. When passed the location of the data field, parameterized functions automatically format the data or command as required by the communication interface used by the peripheral. The assembled message is passed to the appropriate hardware controller responsible for low-level communication with the peripheral. In this solution, the addressing, bit indexing, and formatting are handled by pyripherals before the message is sent over the Opal Kelly FrontPanel API to a hardware-level communication controller on an Opal Kelly FPGA for rapid readout of photon return time histograms. Similar to pyripherals, the Python package registerMap = the bit index of the upper end of the data field in the register. Ex. in a 32-bit register where the data field is located in the last 4 bits of the register, Bit Index (High) would be 31.Bit Index (Low) = the bit index of the lower end of the data field in the register. Ex. in a 32-bit register where the data field is located in the last 4 bits of the register, Bit Index (Low) would be 28.pyripherals then reads the spreadsheet, referred to as a register index in the documentation, and returns a dictionary of name-Register pairs using the Register.get_chip_registers static method. 
The example below retrieves the register index from the table above.\u00bb> MYADC_regs = Register.get_chip_registers(\u2018MyADC\u2019)Each Register object holds all values from the spreadsheet for the data field it represents. A guide for creating a register index is located in the documentation. An example register is shown below.\u00bb> print(MYADC_regs[\u2018RESULT\u2019])0\u00d70[0:11]\u00bb> MYADC_regs[\u2018RESULT\u2019].__dict__{\u2018address\u2019: 0, \u2018default\u2019: 15, \u2018bit_index_high\u2019: 11, \u2018bit_index_low\u2019: 0, \u2018bit_width\u2019: 12}The register abstraction of pyripherals allows user code to refer to data fields using only their names. The spreadsheet organization of data fields allows for user-friendly editing and sharing of data field information without the need to change user code. Specific applications include communicating with microcontrollers or development boards like Arduino as well as accessing data using SPI or I2C controllers.pyripherals uses registers to assemble messages for SPI and I2C communication. The example below shows an endpoint definition with a bit_width of 32 that adds 7 to the address every time it is advanced.`define MYADC_WRITE_IN_GEN_ADDR 8\u2019h04 // bit_width=32 addr_step=7Endpoint directions are from the perspective of the FPGA, so WRITE_IN is data from the host computer into the FPGA that is destined for the ADC chip. More information on the syntax and meaning of these lines is available in the endpoint definitions guide.The naming convention for endpoints that contain addresses or bit indices is demonstrated below with curly brackets {} indicating placeholders to be completed by the user.Addresses:`define {CHIPNAME}_{ENDPOINT_NAME}{_GEN_ADDR} {hexadecimal address} // bit_width={bit_width} addr_step={addr_step}Bit Indices:`define {CHIPNAME}_{ENDPOINT_NAME}{_GEN_BIT} {decimal bit index} // addr={address or endpoint name} bit_width={bit_width}Note: the above comments must be placed on the same line as the \u2018define. They are split here for readability.For multiple units of the same chip, each chip class has a create_chips method which instantiates a specified number of chips, incrementing the endpoint addresses and bit indices according to the GEN_ADDR, GEN_BIT, bit_width, and addr_step parameters above.Once created, the user can read the endpoint definitions file with Endpoint.get_chip_endpoints which returns a dictionary of name-Endpoint pairs. An example using the \u201cMYADC_WRITE_IN\u201d endpoint from earlier is shown below.\u00bb> MYADC_eps = Endpoint.get_chip_endpoints(chip_name=\u2019MYADC\u2019)\u00bb> print(MYADC_eps[\u2018WRITE_IN\u2019])0\u00d74[None:None]\u00bb> MYADC_eps[\u2018WRITE_IN\u2019].__dict__{\u2018address\u2019: 4, \u2018bit_index_low\u2019: None, \u2018bit_index_high\u2019: None, \u2018bit_width\u2019: 32, \u2018gen_bit\u2019: False, \u2018gen_address\u2019: False, \u2018addr_step\u2019: 7}The Endpoint class in pyripherals extends the capabilities of the Opal Kelly FrontPanel API by automatically linking the Python and Verilog endpoint data with a shared definitions file. With pyripherals, when the user changes the value of an endpoint in the definitions file the change is reflected in both the Python and Verilog code.The FPGA code that uses pyripherals is available at https://github.com/lucask07/covg_fpga/. It is written for the Opal Kelly XEM7310 FPGA and supports I2C, SPI, and LVDS communication with a DDR for data buffering. 
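As a concrete illustration of the register abstraction above, the short hedged sketch below extracts the RESULT field from a raw 32-bit readback word using the bit indices stored in the Register; the import path is assumed from the package documentation, and the raw word is invented for illustration.
from pyripherals.core import Register  # import path assumed

regs = Register.get_chip_registers("MyADC")   # name -> Register dictionary
result = regs["RESULT"]                       # occupies bits [0:11]
raw_word = 0x00000FA3                         # hypothetical readback value
mask = (1 << result.bit_width) - 1            # 0xFFF for a 12-bit field
value = (raw_word >> result.bit_index_low) & mask
print(value)                                  # 4003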
An example use of this code is an impedance analyzer using a DAC80508 digital-to-analog converter (Our FPGA code for use with onverter and an Aonverter communichttps://pyripherals.readthedocs.io/en/latest/index.html and the GitHub is available at https://github.com/Ajstros/pyripherals. pyripherals is available for install from pip at https://pypi.org/project/pyripherals/.Documentation is available at pyripherals was developed under an NIH-funded project to create a digital ion channel amplifier at the University of St. Thomas where it is being used to communicate with and control an FPGA-based data acquisition system for real-time feedback."} +{"text": "Endoscopic submucosal dissection (ESD) as a treatment for superficial pharyngeal cancer has been developed and widely accepted by endoscopists in JapanVideo\u20061\u2002A nerve-preserving strategy for endoscopic submucosal dissection of superficial pharyngeal cancers.ESD was performed under general anesthesia. Narrow-band imaging (NBI) and Lugol chromoendoscopy clearly revealed the lesion . We perEndoscopy_UCTN_Code_TTT_1AO_2AG"} +{"text": "A 51-year-old man was referred to our institution with persistent iron deficiency anemia. Initial gastroduodenoscopy and colonoscopy at his local hospital were unremarkable. A subsequent small bowel capsule endoscopy revealed a distal small bowel polyp with evidence of fresh bleeding . A tripDuring intraoperative enteroscopy, an actively bleeding polyp was detected in the distal ileum , and a Video\u20061\u2002The intraoperative enteroscopy procedure.Obscure gastrointestinal bleeding (OGIB) is a challenging condition that accounts for nearly 5\u200a% of all gastrointestinal bleeding casesEndoscopy_UCTN_Code_CCL_1AC_2AC"} +{"text": "These methods thus are hard to apply on real-world datasets (like ImageNet) since there are no such pre-defined attributes in the data environment. The latest works have explored to use semantic-rich knowledge graphs (such as WordNet) to substitute pre-defined attributes. However, these methods encounter a serious \u201crole=\u201cpresentation\u201d>domain shift\u201d problem because such a knowledge graph cannot provide detailed enough semantics to describe fine-grained information. To this end, we propose a semantic-visual shared knowledge graph (SVKG) to enhance the detailed information for zero-shot learning. SVKG represents high-level information by using semantic embedding but describes fine-grained information by using visual features. These visual features can be directly extracted from real-world images to substitute pre-defined attributes. A multi-modals graph convolution network is also proposed to transfer SVKG into graph representations that can be used for downstream zero-shot learning tasks. Experimental results on the real-world datasets without pre-defined attributes demonstrate the effectiveness of our method and show the benefits of the proposed. Our method obtains a +2.8%, +0.5%, and +0.2% increase compared with the state-of-the-art in 2-hops, 3-hops, and all divisions relatively.Almost all existing zero-shot learning methods work only on benchmark datasets ( In recent years, zero-shot learning has attracted widespread attention in computer vision and machine learning areas. It aims to predict new classes that appear during the training process. The base idea of zero-shot learning is to use labeled semantic information to learn a projection between semantic space and visual space. 
Then, visual samples from new classes can be projected into semantic space and matched against the attributes to decide the corresponding classifications. In the past five years, a large number of zero-shot learning methods have been proposed based on this idea. However, these zero-shot learning methods inevitably follow a major premise: semantic attributes must be pre-defined and labeled in the datasets. Almost all recent zero-shot learning works are evaluated only on six small benchmark datasets. Moreover, these datasets are quite small, containing only tens to hundreds of classes and tens of thousands of samples, far from real-world data environments that typically involve tens of thousands of classes and millions of samples. Thus, existing zero-shot methods are hard to apply in real-world data environments such as ImageNet, which does not provide pre-defined attributes for any class.In the past four years, the graph neural network (GNN) has been adopted by zero-shot learning, which makes zero-shot learning tasks applicable in real-world data environments. Methods of this kind use semantic-rich knowledge graphs (such as WordNet) to substitute for pre-defined attributes and obtain better generalization power in semantic representation. However, these GCN-based zero-shot learning methods for image classification on the ImageNet dataset, like GCNZ and DGP, cannot describe fine-grained semantics. Thus, the core question becomes \u201ccan a knowledge graph provide fine-grained semantics or even more for zero-shot learning?\u201d.Based on this idea, we first scan all the image samples of seen classes in ImageNet-1K to extract part visual features for each seen class. Then, WordNet nodes and relations are extracted according to the seen and unseen class labels in ImageNet-1K and embedded as semantic features by a word-embedding model. After that, the part visual features and the WordNet semantic features are connected together as a semantic-visual shared knowledge graph. At last, a multi-modal GCN network is proposed to embed SVKG into graph representations that can be used in zero-shot learning tasks.Attribute-based methods learn the semantic attribute characteristics of visual objects and then judge whether the attribute combinations are satisfied by the visual objects. They account for the largest proportion of zero-shot learning (ZSL) research since ZSL was first proposed with DAP in 2009. Following the design principles of the AwA dataset, a series of benchmark datasets with pre-defined attributes have been established, including AwA2, CUB, SUN, FLO, and aPY. Based on these benchmark datasets, ZSL has developed rapidly through several milestones.In 2013, with the development of semantic embedding technology, the first milestone of ZSL was the ALE model. In 2017, with the development of deep learning technology in the field of visual computing, the second milestone of ZSL was the SAE model. In 2018, GAN-based generative methods marked a third milestone, building on the remarkable performance of Generative Adversarial Networks (GAN) in image synthesis.Recently, some researchers have started to investigate how to mix various types of semantics to address further \u201cdomain shift\u201d issues. However, the above methods use pre-defined attributes as their primary semantic source. As a result, these methods are essentially restricted to benchmark datasets with pre-defined attributes.
The applicability of these methods in real-world environments devoid of pre-defined features is restricted to a small number of cases.Knowledge graph (KG) actually is a third part knowledge base that can provide semantic information for semantic-to-visual transformation in zero-shot learning. Knowledge graph (KG) actually constitutes a third-party knowledge base that can provide semantic information for semantic-to-visual transformation in zero-shot learning. Thus, KG is intuitively considered a substitution for pre-defined attributes. Benefiting from the development of GNN, GCNZ creates However, compared to the pre-defined features in the six benchmark datasets, the knowledge network used in GNN-based approaches providesTo enhance the fine-grained semantic information for existing KG, the latest research tries to apply \u201cvisual knowledge\u201d in zero-shot learning. However, the above methods continue to see visual semantics as auxiliary or augmented semantics. In contrast to them, we want to combine semantic and visual representations into a single common knowledge graph, so-called visual-semantic shared knowledge graph.X\u00a0\u2208\u00a0\u211dN\u00d7F be the word-embedding set for all the nodes in A\u00a0\u2208\u00a0\u211dN\u00d7N be the adjacency matrix transferred from F is the dimensions of the embedding. After, a graph representation model g\u0398 is learned to transfer X to the graph node representation H\u00a0\u2208\u00a0\u211dN\u00d7F\u2032 supervised by Iseen. Here, Iseen\u00a0\u2208\u00a0\u211dN\u00d7F\u2032\u2032 is the seen image feature set extracted by a CNN model. At last, an image classification model Iunseen to the particular class according to the graph node representation H. Here, Iunseen\u00a0\u2208\u00a0\u211dN\u00d7F\u2032\u2032 is the unseen image feature set extracted by a CNN model.Knowledge graph-based Zero-Shot learning uses the knowledge-aided method to learn an image classification model from seen classes to predict unseen classes. First, given a knowledge graph SVKG) is a multi-modal graph that contains both semantic embedding and visual embedding in the same graph. Let X represents the embedding set of the graph nodes with Xs denotes Word-Embedding nodes and Xv represents CNN-Embedding nodes. A represents the edge set while As denotes the edges among Word-Embedding nodes and Av denotes the edges among CNN-Embedding nodes. SVKG is defined in Algorithm 1.Semantic-visual shared knowledge graph ,\u00a0seen\u00a0images Output:\u00a0SV\u00a0KG \u00a01:\u00a0\u00a0Xs\u00a0\u2190\u2212\u00a0Put\u00a0X\u00a0into\u00a0Glove \u00a02:\u00a0\u00a0As\u00a0\u2190\u2212\u00a0Put\u00a0X\u00a0into\u00a0WordNet\u00a0hyponym/hypernym \u00a03:\u00a0\u00a0Xv\u00a0\u2190\u2212\u00a0Put\u00a0seen\u00a0images\u00a0into\u00a0EfficientDet \u00a04:\u00a0\u00a0As\u00a0\u2190\u2212\u00a0Put\u00a0Xv\u00a0connecte\u00a0with\u00a0semantic\u00a0object\u00a0node \u00a05:\u00a0\u00a0X\u2032\u00a0\u2190\u2212\u00a0Xs\u00a0\u222a\u00a0Xv \u00a06:\u00a0\u00a0A\u2032\u00a0\u2190\u2212\u00a0As\u00a0\u222a\u00a0Av \u00a07:\u00a0\u00a0SV\u00a0KG\u00a0\u2190\u2212{\u00a0X\u2032,\u00a0A\u2032\u00a0} \u00a08:\u00a0\u00a0return\u00a0\u00a0SV\u00a0KG \u00a0_______________________________________________________________________________ SVKG, the semantic node set Xs is constructed from WordNet (https://wordnet.princeton.edu/) Noun words and embedded into word features by Glove (https://nlp.stanford.edu/projects/glove/). 
The edge matrix As is established by WordNet hyponym/hypernym links among these words. The visual node set Xv is obtained by using EfficientDet (etc.) from ImageNet-1k. Then, these visual nodes are connected to their semantic object node by the edge matrix Av . An example of SVKG about bird Finch is presented in In cientDet to detecXs needs to be transferred into semantic-visual shared space H. However, the visual feature space Xv might cause interference during Xs\u27f6H transfer process, leading to a serious over-fitting problem. Empirically, many data augmentation methods have shown to improve the generalization and robustness of the learning model . In SVKGaug, there are two feature modals. One is semantic feature modal {Xs generated from Glove-Embedding. The other is visual feature modal Xv extracted by EfficientDet. Thus, multi-modals graph representation learning is aimed to transfer both Xs and Xv into a semantic-visual shared feature modal H.As explained in the \u2018Problem Definition\u2019 section, knowledge graph Xs and Xv into H, as defined in g\u0398 is a Graph Convolutional Network (GCN) employed from D is the diagonal degree matrix of A matrix, where Di,i). Then, Iunseen is input into the trained classification model to calculate the similarities with all graph features H\u2032. To obtain the predicting result, Resultset records all the similarities between H\u2032 and Iunseen, and the labeli of corresponding In the predicting process, we first use the same Resnet-50 model to extract unseen image feature ________________________________________________________________________________________ Algorithm\u00a02\u00a0Zero-shot\u00a0Predicting\u00a0Process _______________________________________________________________________________________ Input:\u00a0H\u2032,\u00a0Iunseen Output:\u00a0Top-k\u00a0result \u00a01:\u00a0Resultset\u00a0\u2190\u2212{\u2205,\u00a0\u2205} \u00a02:\u00a0for\u00a0H\u2032i\u00a0in\u00a0H\u2032\u00a0do \u00a03:\u00a0\u00a0\u00a0Simi\u00a0\u2190\u2212\u00a0H\u2032i\u00a0\u22c5\u00a0Iunseen \u00a04:\u00a0\u00a0\u00a0lableli\u00a0\u2190\u2212\u00a0findLabel(H\u2032i) \u00a05:\u00a0\u00a0\u00a0Resultset\u00a0\u2190\u2212{Simi,\u00a0lableli} \u00a06:\u00a0end\u00a0for \u00a07:\u00a0Resultset\u00a0\u2190\u2212\u00a0SortBySim(Resultset) \u00a08:\u00a0Resultset\u00a0\u2190\u2212\u00a0Top-k(Resultset) \u00a09:\u00a0return\u00a0\u00a0Resultset \u00a0____________________________________________________________________________________ Iunseen is not used during the whole training process. In the predicting process, the input Iunseen is the first time the model touches the unseen images.It can be observed, the unseen image feature SVKG. At last, extra downstream tasks and observations are discussed.In this section, the datasets and evaluation metrics are introduced first. Then, the implementation details about the model settings are explained. After, the state-of-the-arts comparisons are presented. Then, an ablation study is conducted for the proposed nth jumps to connect to other classes of ImageNet through the relations of a WordNet graph. For example, as showed in The experiments are conducted on ImageNet , which iXv in SVKG, a \u201cDetailed\u201d group which contains four subsets are divided from ImageNet-1K and corresponding parts visual features Xv for all these detailed categories are extracted from their image samples by EfficientDet. In a short, the experiments involve two groups and seven divisions divided by ImageNet. 
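The propagation rule for g\u0398 quoted above did not survive intact; the paper employs the GCN of Kipf and Welling, whose symmetric-normalized layer can be sketched as follows (a minimal sketch with assumed shapes, not the authors' exact implementation).
import torch

def gcn_layer(H, A, W):
    # H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
    A_hat = A + torch.eye(A.shape[0])        # adjacency with self-loops
    deg = A_hat.sum(dim=1)                   # node degrees
    D_inv_sqrt = torch.diag(deg.pow(-0.5))   # D^{-1/2}
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# H: (N, F) node features, A: (N, N) adjacency, W: (F, F') weights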
The detailed information for each corresponding division is shown in In order to show the effectiveness of visual feature There, the label rate demonstrates the difficulty of the corresponding division. This is because with the reduction of supervision in semi-supervised manner, there are higher requirements for the generalization and robustness of the model. The performance of corresponding division will also degrade as the supervised label rate decreases. Therefore, the accuracy in widely different divisions can reflect the robustness of the model. As can be seen from The proposed method is evaluated according to Generalized Zero-Shot Learning (GZSL) setting, which is the most challenging evaluation metric for zero-shot learning. In GZSL, all classes, no matter seen or unseen classes, are all considered as the candidate classes in the testing, while the test samples are only from unseen classes. We adopt the same train/test splits and the Top-k Hit Ratio (Hit@k) metric, in accordance with In the model implementations, Resnet-50 is used In this section, all the comparisons conducted in both General and Detailed groups follow the GZSL setting. For General group, DeViSE , ConSE , ConSE2 General group comparisons: From But from Top-2 to Top-20 accuracy evaluations, the latest method CL-DKG surpasses our method. The reason might be the fine-grained features strengthen discrimination ability but weaken, to some extent, the generalization ability of the model. However, it can be seen from Detailed group comparisons: The most significant contribution of the proposed method is that fine-grained visual features are shared with semantic features in the same knowledge graph. Here, Detail comparisons are aimed to show the effects of such shared visual features through some representative categories of ImageNet. It can be seen from From SVKG mainly consists of Xs, Xv and Xaug. Thus, four combinations, \u201cXs\u201d only, \u201cXs+ Xv\u201d, \u201cXs+ Xaug\u201d and \u201cAll\u201d, for ablation study are established. Indeed, \u201cXs\u201d denotes the traditional knowledge graph while \u201cAll\u201d represents the proposed knowledge graph SVKGaug. The experiments are conducted also on the Detailed group to 17.7% (All). The reason is that dog is a well-known category and widely used everywhere. This leads the upstream feature extractor, EfficientDet, to work well and extract more accurate part visual features for SVKG that significantly improves the Top-1 accuracy. A similar phenomenon is also observed in bird and primate categories. The study thus also reveals that the better performance of the upstream feature extractor, the better accuracy the proposed SVKG might achieve. In a short, the study demonstrates the effectiveness of each proposed module in real-world zero-shot learning tasks.As defined in ed group to evaluXs+ Xaug\u201d. The influence of graph augmentation Xaug is weakened after two or three hops, and distant nodes are easy to overfit with the presence of part visual nodes. Therefore, in the long-distance prediction, the overall performance of the model decreases.In the cases from Top-5 to Top-20, the best performance is \u201c For further discussion, we conducted an unseen class search compared with DGP on Detailed group. Some representative results are presented in SVKG graph on dog category. The orange-colored nodes represent seen classes and the blue-colored nodes represent unseen classes. SVKG. Intuitively, SVKG is more structured and order than the graph features of GCNZ and DGP. 
It can be observed that the way blue-colored nodes are distributed among orange-colored nodes reveals the hidden relationships between seen and unseen classes. This is because the proposed visual features Xv pull the graph representations of SVKG close to the real-world dog taxonomy. Unseen class nodes are thus placed near their most related seen class nodes in a taxonomically meaningful way. This alleviates coincidental proximity during unseen-class prediction, which improves the performance of zero-shot learning.Meanwhile, t-SNE was used to visualize the knowledge graph features of the initial graph, the GCNZ graph, the DGP graph, and SVKG. Moreover, we also compared the feature distribution of SVKG with the GCNZ graph feature distribution and the real image feature distribution. SVKG is significantly closer to the real image feature distribution than GCNZ. This indicates that the features of SVKG lie closer to real visual features and are therefore easier to match with unseen classes. This also explains why the proposed method achieves the best accuracy on Top-1 and Top-2 unseen-class prediction.In this article, we propose a semantic-visual shared knowledge graph (SVKG) for zero-shot learning on a real-world dataset without needing pre-defined attributes. It combines semantic features (from WordNet and Glove embedding) and visual features (extracted from raw images by EfficientDet) together in the same graph. The visual features provide detailed information for describing fine-grained semantics, which alleviates the \u201cDomain-Shift\u201d problem during the semantic-to-visual transformation of zero-shot learning. A novel multi-modal GCN model is also proposed to learn the graph representations of SVKG. Afterward, the graph representations are further used for downstream zero-shot learning tasks in the experiments.Experimental results on the real-world dataset demonstrate the effectiveness of our method and illustrate that the multi-modal graph-guided model generates more discriminative representations. Our method significantly surpasses other methods on Top-1 to Top-5 accuracy evaluations in the bird, snake, primate, and dog categories. Especially on Top-1 accuracy evaluations, our method even achieves a 2-3 times increase compared with the state-of-the-art.The important open question for zero-shot learning tasks in real-world environments is still how to reasonably construct and use the knowledge graph. In this paper, SVKG stores fine-grained information only in part visual features. In the future, we will add color, material, shape, and other relations and associated nodes to SVKG in order to further increase the model\u2019s performance.10.7717/peerj-cs.1260/supp-1 Supplemental Information 1. 10.7717/peerj-cs.1260/supp-2 Supplemental Information 2."} +{"text": "The term \u201cRNA-seq\u201d refers to a collection of assays based on sequencing experiments that involve quantifying RNA species from bulk tissue, from single cells, or from single nuclei. The kallisto, bustools, and kb-python programs are free, open-source software tools for performing this analysis that together can produce gene expression quantification from raw sequencing reads. The quantifications can be individualized for multiple cells, multiple samples, or both. Additionally, these tools allow gene expression values to be classified as originating from nascent RNA species or mature RNA species, making this workflow amenable to both cell-based and nucleus-based assays. 
This protocol describes in detail how to use kallisto and bustools in conjunction with a wrapper, kb-python, to preprocess RNA-seq data. While multiple steps are necessary to process input consisting of FASTQ sequencing files, a reference genome FASTA, and a GTF annotation28, to an output of quantifications using kallisto and bustools, these steps are greatly facilitated by the wrapper tool, kb-python. kb-python can extract reference transcriptomes from reference genomes and run kallisto and bustools in workflows optimal for each assay type. The kb-python tool simplifies the running of kallisto and bustools to the extent that all of this can be done in two steps: `kb ref` for generating a kallisto index from an annotated reference genome and `kb count` for mapping and quantification. Thus, kallisto, bustools, and kb-python make RNA-seq preprocessing efficient, modular, flexible, and simple.1The preprocessingindex from a set of sequences, referred to as targets, representing the set of sequences that RNA-seq reads can be mapped to. In a standard analysis, these targets are usually transcript sequences . However, more generally, users can define targets from any sets of sequences they wish to map their sequencing reads against. Since kallisto is a tool that leverages pseudoalignment, the mapping procedure relies on read assignment, such that each read is deemed to be compatible with a certain set of targets, rather than standard alignment. The kallisto index is based on the Bifrost29 implementation of the colored de Bruijn graph30, which enables memory-efficient and rapid read assignment.For RNA-seq read mapping, kallisto builds an kb ref command . The index created by --workflow=nac (nac: nascent and cDNA) contains both the cDNA and the nascent transcripts. The nascent transcript sequences consist of the full gene (both exons and introns). This nac index is suitable for single-nucleus RNA-seq as there exists a high abundance of non-mature transcripts captured in nucleus-based sequencing assays.32 Additionally, this nac index should be used for analyses that require jointly modeling nascent and mature RNA species.38 For both the standard and nac workflows, a user supplies a genome FASTA and GTF annotation, which kb-python uses to extract the relevant sequences. Finally, if one wishes to index a custom set of targets or of k-mers allows specific biotypes to be selected from the GTF file, making possible filtering of entries such as pseudogenes, which can improve read mapping accuracyry usage . It is rkb count command within kb-python enables mapping and quantification of bulk, single-cell, and single-nucleus RNA-seq reads (The eq reads . As diffThe specifications for sequencing assay technology within kb-python are as follows:Technology string: A technology string for a particular type of assay can be supplied via the -x option. The technology string can be used in one of three ways:kb --list) so one can name one of those directly (e.g. one can specify -x 10xv3).Option 1: Several assays are predefined within the software , and the biological sequence that is to be mapped .Strandedness: If a read (or the first read in the case of paired-end reads) is to be mapped in forward orientation, one should specify --strand=forward. If it is to be mapped in reverse orientation, one should specify --strand=reverse. If one does not want to map reads with strand-specificity, then one should specify --strand=unstranded. 
If a predefined name is used in the technology string -x option (option 1), then kb-python uses a default stranded option for that technology ; otherwise, the default is unstranded. Setting the --strand option explicitly will overrule the default option.Parity: If the technology involves two biological read files that are derived from paired-end sequencing (as is the case with Smartseq243 and Smartseq344 and many bulk RNA sequencing kits), one should specify --parity=paired to perform mapping that takes into account the fact that the reads are paired-end. Otherwise, one can specify --parity=single. If a predefined name is used in the -x technology string option (option 1), then kb-python uses the default parity option for that technology .On list: For single-cell and single-nucleus sequencing assays, barcodes are used to identify each cell or nucleus. The \u201con list\u201d of barcodes represents the known barcode sequences that are included in the assay. Barcodes extracted from the sequencing reads will be error-tolerantly mapped to this list in a process known as barcode error correction. The on list filename can be specified with the -w option in kb count. It can also be obtained by seqspec.40 If an on list is not provided or cannot be found for the given technology, then an on list is created by bustools via the bustools allowlist command which identifies repeating barcodes in sequencing reads. If the technology does not include cell barcodes (as is the case in bulk RNA-seq), the \u201con list\u201d option is irrelevant and no barcode processing occurs which should be the case for assays that don\u2019t include cell/nuclei barcodes . If a predefined name is used in the -x technology string option (option 1), then kb-python uses the default on list option for that technology.--workflow=nac should be used in kb count so that the nascent and mature RNA species are quantified accurately; otherwise that option should be omitted or --workflow=standard (which is the default) can be explicitly specified. For the nac workflow, one obtains three count matrices: 1) nascent, 2) mature, and 3) ambiguous. In most experiments, the plurality of reads will be \u201cambiguous\u201d since they originate from exons, which are present in both nascent RNA and mature RNA. Therefore, it is desirable to generate additional matrices by adding the counts from those three matrices, which users can either do themselves or by using the --sum option.24--sum=total adds all three matrices, --sum=cell adds the mature and ambiguous matrices, and --sum=nucleus adds the nascent and ambiguous matrices. Different matrices may be used for different types of analyses. For example, in single-cell RNA-seq analysis (where most \u201cambiguous\u201d counts are likely of mature RNA origin), jointly modeling the mature+ambiguous count matrix (--sum=cell) with the nascent count matrix permits biophysical modeling of RNA processing.38 In single-nucleus RNA-seq quantification, one might want to use --sum=nucleus to add up the nascent+ambiguous counts. The kb-python, kallisto, and bustools commands for the standard workflow and the nac workflow are shown in If a nac index was generated by kb ref, 48, sleuth49, limma-voom51, and other differential gene expression programs.In addition to single-cell and single-nucleus RNA-seq, kb count can be used for bulk RNA-seq. Bulk RNA-seq generally does not have UMIs or cell barcodes and relies on cDNA mapping. 
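To make the nac options above concrete, a hedged pair of invocations is sketched below; every file name is a placeholder rather than the output of a specific run.
# Build a nac (nascent and cDNA) index from a genome and its annotation:
kb ref --workflow=nac -i index.idx -g t2g.txt \
    -f1 cdna.fasta -f2 nascent.fasta -c1 cdna_t2c.txt -c2 nascent_t2c.txt \
    genome.fa.gz annotation.gtf.gz
# Quantify 10x v3 single-nucleus reads, also writing the
# nascent+ambiguous sum appropriate for nuclei:
kb count -x 10xv3 --workflow=nac --sum=nucleus \
    -i index.idx -g t2g.txt -c1 cdna_t2c.txt -c2 nascent_t2c.txt \
    -o output_dir R1.fastq.gz R2.fastq.gz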
With -x BULK as the technology string, a workflow specific for bulk RNA-seq quantification is executed . This wiTo facilitate multi-sample analysis, artificial unique sample-specific barcodes can be created and stored in the BUS file and the resulting mapping between the artificially generated barcode and the sample ID is outputted. These sample-specific barcodes are 16-bp in length and are also stored in the BUS file. Where there exists both a cell barcode (like in single-cell RNA-seq) and a sample-specific barcode, both sets of barcodes will be outputted so that each entry in the resulting output count matrix can be associated with a particular cell and a particular sample. To utilize the multi-sample workflow, a batch file containing the file names of the FASTQ files must be provided .--dry-run option in kb count outputs the kallisto and bustools commands that will be run without actually running the programs. Also, the --verbose option in kb count is helpful for examining the kallisto and bustools commands that are being run as well as their output.The technical details of how kb count utilizes kallisto and bustools are detailed in the following paragraph. Note that the kallisto bus command within kallisto to produce a BUS file, which stores the read mapping information, and then uses bustools23 commands to process the BUS file. The kallisto bus command maps RNA-seq reads to a kallisto index, and the resultant BUS file stores the mapping information, including the barcode, unique molecular identifier (UMI), and the equivalence class representing the set of transcripts the read is compatible with.23 In certain RNA-seq assays, barcodes and/or UMIs may not be present, and are therefore not considered when processing the BUS file. After the mapping step is complete, the BUS file is sorted via the bustools sort command to facilitate further processing. For single-cell and single-nucleus experiments with multiplexed barcodes in the sequencing reads, an \u201con list\u201d of barcodes, representing the known barcode sequences that are included in the assay, needs to be provided. If an \u201con list\u201d is unavailable, the bustools allowlist command can be used to construct one from a sorted BUS file. The barcodes in the sorted BUS file are error-corrected to the \u201con list\u201d via bustools correct, then the BUS file is sorted again with bustools sort. The final sorted, on list-corrected BUS file is then used to generate quantifications via count matrices through the bustools count command. At any point, a sorted BUS file can be inputted into bustools compress to create a compressed BUS file (a BUSZ file), which can be subsequently decompressed via bustools decompress.52 Other bustools features enable more specialized workflows beyond what is provided by kb-python , one can supply the --cm option for quantification.Gene-level count matrices: In single-cell and single-nucleus RNA-seq, typically a gene-level count matrix is produced by collapsing UMIs to the gene level. Here, the bustools count command is run with the --genecounts option is not provided, and --multimappingis provided to avoid discarding reads or collapsed UMIs that are assigned to multiple genes. If UMIs are not present in the sequencing technology, the --cm option is supplied to perform counting without UMI collapsing. 
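The command chain that kb count orchestrates, as described in the preceding paragraph, can be sketched for a 10x v3 single-cell run as follows; file names, the on list, and thread counts are illustrative.
kallisto bus -i index.idx -x 10xv3 -o bus_out R1.fastq.gz R2.fastq.gz
bustools sort -t 4 -o bus_out/sorted.bus bus_out/output.bus
bustools correct -w onlist.txt -o bus_out/corrected.bus bus_out/sorted.bus
bustools sort -t 4 -o bus_out/corrected.sorted.bus bus_out/corrected.bus
bustools count --genecounts -o bus_out/counts/cells_x_genes \
    -g t2g.txt -e bus_out/matrix.ec -t bus_out/transcripts.txt \
    bus_out/corrected.sorted.bus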
While downstream analyses can be performed on TCCs55, it is more often useful to produce transcript-level abundances from the TCCs for technologies where sequencing reads span the full length of transcripts, such as bulk RNA-seq. In such cases, an expectation-maximization algorithm is typically performed to probabilistically estimate transcript abundances.56 The procedure to generate transcript-level abundance matrices is performed by running the kallisto quant-tcc command on the TCC matrices.Transcript-level count matrices: Transcript-compatibility counts (TCCs) are counts assigned to equivalence classes where each equivalence class is defined by a unique set of transcripts. For producing a matrix of transcript-compatibility counts (TCCs), the 40 provides a specification for the structure of genomic sequencing assays, formatted in a machine-readable YAML file. The specification can be readily inputted into the kallisto bustools workflow for preprocessing reads from a given assay being the matrix columns.Here, the quantification output of the kb count command is described. While the initial step of kb count uses kallisto to produce a BUS file located at output_dir/output.bus, the actual quantification results are located in matrices in subdirectories of output_dir/. All matrices have the extension .mtx and will be in a sparse matrix (Matrix Market) file format with the barcodes (i.e. the cells or samples) being the matrix rows and the genes :The standard workflowcells_x_genes.mtx: The count matrix (in Matrix Market file format); only exonic reads are countedcells_x_genes.barcodes.txt: The barcodes representing the matrix row namescells_x_genes.genes.txt: The gene IDs representing the matrix column namescells_x_genes.genes.names.txt: Same as cells_x_genes.mtx except with gene names instead of gene IDs for the matrix columnscells_x_genes.barcodes.prefix.txt: If sample-specific barcodes are generated in addition to cell barcodes being recorded, then this file will be created and the sample-specific barcodes will be stored here. The lines of this file correspond to the lines in the cells_x_genes.barcodes.txt which contains the cell barcodes (both files will have the same number of lines). The sample-specific barcodes and cell barcodes can be joined together as a unique identifier for downstream analysis.nac workflow: same as the standard workflow except the .mtx files produced are differentcells_x_genes.mature.mtx: The mature RNA count matrixcells_x_genes.ambiguous.mtx: The nascent RNA count matrixcells_x_genes.nascent.mtx: The ambiguous RNA count matrixcells_x_genes.cell.mtx: The mature+ambiguous RNA count matrix (note: this is what is quantified into the count matrix in the standard workflow)cells_x_genes.nucleus.mtx: The nascent+ambiguous RNA count matrixcells_x_genes.total.mtx: The mature+nascent+ambiguous RNA count matrix--tcc option is used.For RNA-seq assays (e.g. bulk RNA-seq or Smartseq2) that profile the full length of transcripts in which case it is desirable to perform transcript-level quantification, the output_dir/counts_unfiltered/ which contains the following files for the standard workflow:The first step to doing transcript-level quantification is to obtain transcript-compatibility counts (TCCs) over equivalence classes (ECs). 
cells_x_tcc.mtx: The count matrix containing the TCCs
cells_x_tcc.barcodes.txt: The barcodes representing the matrix row names
cells_x_tcc.ec.txt: The equivalence classes representing the matrix column names (each equivalence class is defined by the set of all transcripts within the equivalence class)

The --tcc option will additionally produce transcript-level estimated counts, which will be placed in the output_dir/quant_unfiltered/ directory, which contains the following:
matrix.abundance.mtx: The matrix containing the transcript-level estimated counts
matrix.abundance.tpm.mtx: The matrix containing the TPM-normalized transcript-level abundances
matrix.efflens.mtx: A matrix that contains the transcript effective lengths
matrix.fld.tsv: A file with two columns, containing the mean and standard deviation, respectively, of the fragment length distribution used to produce transcript-level abundances and effective lengths for each row of the matrix
matrix.abundance.gene.mtx: A matrix that is the same as the matrix.abundance.mtx matrix except counts are aggregated to gene level
matrix.abundance.gene.tpm.mtx: A matrix that is the same as the matrix.abundance.tpm.mtx matrix except TPMs are aggregated to gene level
transcripts.txt: The transcript names representing the matrix column names for the transcript-level quantification matrices
genes.txt: The gene IDs representing the matrix column names for the gene-level aggregation quantification matrices
transcript_lengths.txt: The transcript names along with their lengths

*Note: The row names are the individual samples and will be the same as those in output_dir/counts_unfiltered/cells_x_tcc.barcodes.txt. The output_dir/matrix.cells and output_dir/matrix.sample.barcodes files provide a mapping between the sample name and the sample barcode.

*Note: The --matrix-to-directories option will output each row of the matrix into a separate subdirectory. In other words, using this option will produce multiple new directories within output_dir/quant_unfiltered/. Each one will be named abundance_{n}. Within each subdirectory, an abundance.tsv text file and an abundance.h5 HDF5 file will be created containing the quantifications for that particular sample. These abundance files are identical to the abundance files produced by the original version of kallisto [60] for bulk RNA-seq.

For downstream processing in Python, an anndata [61] object needs to be created from these output matrices; when empty or low-quality barcodes are filtered out beforehand, the matrices that are loaded in become much smaller and more efficient to process.
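The following is a minimal sketch (not part of kb-python itself) of how the Matrix Market outputs described above might be loaded into an anndata object, assuming the standard-workflow file names and that scipy, pandas, and anndata are installed:

import anndata
import pandas as pd
import scipy.io as sio

prefix = "output_dir/counts_unfiltered/"

# Load the barcodes-by-genes count matrix from Matrix Market format.
counts = sio.mmread(prefix + "cells_x_genes.mtx").tocsr()

# Row names are barcodes; column names are gene IDs.
barcodes = pd.read_csv(prefix + "cells_x_genes.barcodes.txt", header=None)[0].values
genes = pd.read_csv(prefix + "cells_x_genes.genes.txt", header=None)[0].values

adata = anndata.AnnData(X=counts,
                        obs=pd.DataFrame(index=barcodes),
                        var=pd.DataFrame(index=genes))
print(adata)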
The following are required: a 64-bit computer running either macOS, Windows, or a Linux/Unix operating system; kallisto version 0.50.1 or later (which comes packaged with kb-python); bustools version 0.43.1 or later (which comes packaged with kb-python); kb-python version 0.28.0 or later; Python 3.7 or later (for kb-python version 0.28.0); and bulk, single-cell, or single-nucleus RNA sequencing reads in (possibly gzipped) FASTQ format.

The runtime depends on the size of the reference being indexed, the number and length of the sequencing reads being processed, other properties of the dataset being quantified, system hardware, and the number of threads allotted. The kb ref command only needs to be run once to create the index against which reads will be mapped. With 8 threads on a server with x86-64 architecture and 32 Intel Xeon CPUs (E5-2667 v3 @ 3.20GHz), kb ref takes approximately 15 minutes to generate a standard index from the GRCm39 mouse genome (with its unfiltered GTF file) and about an hour to generate the nac index. For the preprocessing of 800 million Illumina sequencing reads (stored in a single pair of fastq.gz files) produced by single-cell RNA-seq from 10x Genomics, kb count with the nac workflow can take under an hour on 8 threads and under 40 minutes on 16 threads, with an even lower runtime for the standard workflow.

The t2g file created by kb ref should be the exact file used by kb count when running kb count on that index. All the transcripts in the t2g file must be exactly the same as the transcripts present in the kallisto index. Incompatibilities can lead to unpredictable behavior in the bustools quantification step.

When using kb ref to generate a kallisto index, a genome FASTA file (not a transcriptome FASTA file) should be supplied along with the genome annotation GTF file. A transcriptome file will automatically be generated by kb ref and be indexed by kallisto. In general, the Ensembl [31] .dna.toplevel.fa.gz files or the GENCODE [67] .primary_assembly.genome.fa.gz files should be used as the reference genome. Use of FASTA files incompatible with the supplied GTF will lead to errors.

When performing multiple kb-python runs simultaneously, a different temporary directory must be specified via the --tmp option for each run. The temporary directory also must not exist beforehand.

Finally, one should make sure that the value supplied to the -x technology string option matches the assay from which the sequencing reads were generated. Note: If the technology string begins with a -, for example -1,0,0:0,0,5:0,5,0, one would need to write -x "-1,0,0:0,0,5:0,5,0" to avoid the string being misinterpreted as a command-line flag.

Here, we describe the procedures to use for mouse samples of paired-end bulk RNA-seq, 10x (version 3) single-cell RNA-seq, and 10x (version 3) single-nucleus RNA-seq.

Input:
Paired-end unstranded mouse RNA-seq reads (3 samples):
sample1_R1.fastq.gz sample1_R2.fastq.gz
sample2_R1.fastq.gz sample2_R2.fastq.gz
sample3_R1.fastq.gz sample3_R2.fastq.gz
1. pip install kb_python
2. wget ftp.ensembl.org/pub/release-108/fasta/mus_musculus/dna/Mus_musculus.GRCm39.dna.primary_assembly.fa.gz
   wget ftp.ensembl.org/pub/release-108/gtf/mus_musculus/Mus_musculus.GRCm39.108.gtf.gz
3. kb ref -i index.idx -g t2g.txt -f1 cdna.fasta \
    Mus_musculus.GRCm39.dna.primary_assembly.fa.gz \
    Mus_musculus.GRCm39.108.gtf.gz
4. kb count -x BULK -o output_dir -i index.idx -g t2g.txt \
    --parity=paired --strand=unstranded \
    --tcc --matrix-to-directories \
    sample1_R1.fastq.gz sample1_R2.fastq.gz \
    sample2_R1.fastq.gz sample2_R2.fastq.gz \
    sample3_R1.fastq.gz sample3_R2.fastq.gz
5. Output for sample 1:
output_dir/quant_unfiltered/abundance_1/abundance.tsv
output_dir/quant_unfiltered/abundance_1/abundance.gene.tsv
output_dir/quant_unfiltered/abundance_1/abundance.h5
Output for sample 2:
output_dir/quant_unfiltered/abundance_2/abundance.tsv
output_dir/quant_unfiltered/abundance_2/abundance.gene.tsv
output_dir/quant_unfiltered/abundance_2/abundance.h5
Output for sample 3:
output_dir/quant_unfiltered/abundance_3/abundance.tsv
output_dir/quant_unfiltered/abundance_3/abundance.gene.tsv
output_dir/quant_unfiltered/abundance_3/abundance.h5

The abundance.tsv files contain the transcript-level abundances. The abundance.h5 file contains the same information as the abundance.tsv files except in HDF5 format. The abundance.gene.tsv files contain the gene-level abundances (obtained by summing the transcript-level abundances for each gene). These files can be used in downstream differential gene expression programs.
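To combine the per-sample outputs above into a single expression table, a minimal pandas sketch follows; it assumes the directory layout produced by the run above and the usual kallisto abundance.tsv columns (target_id, length, eff_length, est_counts, tpm):

import pandas as pd

tpm = {}
for n in (1, 2, 3):
    path = f"output_dir/quant_unfiltered/abundance_{n}/abundance.tsv"
    df = pd.read_csv(path, sep="\t", index_col="target_id")
    tpm[f"sample{n}"] = df["tpm"]

# Transcripts-by-samples TPM matrix, ready for downstream analysis.
tpm_matrix = pd.DataFrame(tpm)
print(tpm_matrix.head())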
Input:
10x version 3 single-cell RNA-seq reads: R1.fastq.gz and R2.fastq.gz
1. pip install kb_python
2. wget ftp.ensembl.org/pub/release-108/fasta/mus_musculus/dna/Mus_musculus.GRCm39.dna.primary_assembly.fa.gz
   wget ftp.ensembl.org/pub/release-108/gtf/mus_musculus/Mus_musculus.GRCm39.108.gtf.gz
3. kb ref -i index.idx -g t2g.txt -f1 cdna.fasta \
    Mus_musculus.GRCm39.dna.primary_assembly.fa.gz \
    Mus_musculus.GRCm39.108.gtf.gz
4. kb count -x 10xv3 -o output_dir -i index.idx -g t2g.txt \
    R1.fastq.gz R2.fastq.gz
5. Output:
output_dir/counts_unfiltered/cells_x_genes.mtx
output_dir/counts_unfiltered/cells_x_genes.barcodes.txt
output_dir/counts_unfiltered/cells_x_genes.genes.txt
output_dir/counts_unfiltered/cells_x_genes.genes.names.txt

The cells_x_genes.mtx file is the count matrix, with the barcodes (the row names) listed in cells_x_genes.barcodes.txt and the gene names (the column names) listed in cells_x_genes.genes.names.txt (the corresponding gene IDs are listed in cells_x_genes.genes.txt).

Input:
10x version 3 single-nucleus RNA-seq reads: R1.fastq.gz and R2.fastq.gz
1. pip install kb_python
2. wget ftp.ensembl.org/pub/release-108/fasta/mus_musculus/dna/Mus_musculus.GRCm39.dna.primary_assembly.fa.gz
   wget ftp.ensembl.org/pub/release-108/gtf/mus_musculus/Mus_musculus.GRCm39.108.gtf.gz
3. kb ref --workflow=nac -i index.idx -g t2g.txt \
    -c1 cdna.txt -c2 nascent.txt -f1 cdna.fasta -f2 nascent.fasta \
    Mus_musculus.GRCm39.dna.primary_assembly.fa.gz \
    Mus_musculus.GRCm39.108.gtf.gz
4. kb count -x 10xv3 --workflow=nac -o output_dir \
    -i index.idx -g t2g.txt -c1 cdna.txt -c2 nascent.txt \
    --sum=total R1.fastq.gz R2.fastq.gz
5. Output:
output_dir/counts_unfiltered/cells_x_genes.mature.mtx
output_dir/counts_unfiltered/cells_x_genes.nascent.mtx
output_dir/counts_unfiltered/cells_x_genes.ambiguous.mtx
output_dir/counts_unfiltered/cells_x_genes.cell.mtx
output_dir/counts_unfiltered/cells_x_genes.nucleus.mtx
output_dir/counts_unfiltered/cells_x_genes.total.mtx
output_dir/counts_unfiltered/cells_x_genes.barcodes.txt
output_dir/counts_unfiltered/cells_x_genes.genes.txt
output_dir/counts_unfiltered/cells_x_genes.genes.names.txt

This workflow can be used for both single-cell RNA-seq and single-nucleus RNA-seq. Many count matrix files (.mtx files) are generated. For quantification of the total RNA present in each cell or nucleus, one would want to use cells_x_genes.total.mtx. For biophysical models that jointly consider spliced and unspliced transcripts, one may want to use cells_x_genes.cell.mtx (for the "spliced" transcripts) and cells_x_genes.nascent.mtx (for the "unspliced" transcripts). The barcodes (the matrix row names) are listed in cells_x_genes.barcodes.txt and the gene names (the matrix column names) are listed in cells_x_genes.genes.names.txt (gene IDs are listed in cells_x_genes.genes.txt).

There are many ways to extend the standard workflows beyond bulk RNA-seq, 10x single-cell RNA-seq, and 10x single-nucleus RNA-seq. For an additional, extended example that involves preprocessing mouse multiplexed single-nucleus SPLiT-seq RNA-seq data with a filtered mouse genome annotation, see Supplement 1"} +{"text": "In mobile systems, due to limited space, mutual coupling between nearby antenna elements is an issue that distorts MIMO antenna performance.
A defected ground structure is used to control coupling. The defected ground structure has advantages such as ease of fabrication, compact size, and high efficiency compared with other techniques. Isolation better than 30 dB is achieved between adjacent elements. The -10 dB impedance bandwidth of 700 MHz is achieved for all radiating elements, ranging from 3.3 GHz to 4.1 GHz; the corresponding -6 dB bandwidth is about 900 MHz. The proposed antenna offers good results in terms of fundamental antenna parameters such as reflection coefficient, transmission coefficient, maximum gain, and total efficiency. The antenna achieves an average gain of more than 3.8 dBi and an average radiation efficiency of more than 80% for a single dual-polarized element, and it provides sufficient radiation coverage on all sides. The MIMO antenna characteristics, namely diversity gain (DG), envelope correlation coefficient (ECC), total active reflection coefficient (TARC), and channel capacity, are calculated and found to meet accepted standards. Furthermore, the effect of the user on antenna performance in data mode and talk mode is studied. The proposed design is fabricated and tested in real time. The measured results show that the proposed design can be used in future smartphone applications. The design is compared with some of the existing work, performs favorably on many parameters, and can be used commercially.

This manuscript presents a high-performance dual-polarized eight-element multiple input multiple output (MIMO) fifth generation (5G) smartphone antenna. The design consists of four dual-polarized microstrip diamond-ring slot antennas, positioned at the corners of the printed circuit board (PCB). Low-cost FR-4 dielectric with permittivity 4.3 and thickness of 1.6 mm is used as the substrate, with an overall dimension of 150 x 75 x 1.6 mm3. In the modern day, interest in MIMO wireless communication technology has increased due to key features of MIMO technology such as greatly improved wireless link reliability, transmission capacity, and data rates through multi-path transmission and reception. The optimized values of the proposed MIMO design are listed in the table below.

Here, the design of the single-element dual-polarized antenna and the analysis of simulated and measured results are discussed. The configuration of the dual-polarized antenna is presented in the figure below. The aim of the research is the design of a small MIMO antenna that can be easily integrated into a smartphone's PCB, with wide bandwidth, dual-polarization capability, and low mutual coupling. This is accomplished by using a microstrip patch antenna with a slot in the ground plane, which increases isolation and widens the antenna bandwidth. A slot antenna is used because of its attractive features such as compactness, light weight, and ease of fabrication and integration with radio-frequency circuits. The width of the slot determines the operating frequency, and the circumference of the ring needs to match the dielectric wavelength at the operating frequency.

The dual polarization of the antenna is achieved by placing radiating elements orthogonal to each other so that they are orthogonally polarized. Dual polarization also increases isolation between antenna elements, as both elements are fed differently. Microstrip feed lines are used for feeding the orthogonal antenna elements. Microstrip feeding offers features such as simplicity, miniaturization, low cost, planar compatibility, and high integration, which is the focus of this research.
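As a rough first-order sizing check (a textbook approximation, not the authors' exact design equation), take \varepsilon_{eff} \approx (\varepsilon_r + 1)/2 \approx 2.65 for \varepsilon_r = 4.3 at the 3.6 GHz center frequency; matching the mean ring circumference to the dielectric wavelength then gives

\lambda_d = \frac{c}{f\sqrt{\varepsilon_{eff}}} \approx \frac{3 \times 10^{8}\,\mathrm{m/s}}{3.6 \times 10^{9}\,\mathrm{Hz} \times \sqrt{2.65}} \approx 51\ \mathrm{mm}, \qquad r \approx \frac{\lambda_d}{2\pi} \approx 8.1\ \mathrm{mm},

i.e., a millimeter-scale ring that fits at a smartphone PCB corner.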
The characteristic impedance (Zo) of the microstrip feed line can be calculated from the standard microstrip design relations; for a feed line of width Wf on a substrate of height h with Wf/h >= 1,

Zo = 120*pi / ( sqrt(eps_eff) * [ Wf/h + 1.393 + 0.667*ln(Wf/h + 1.444) ] ),

where the effective permittivity eps_eff is given by

eps_eff = (eps_r + 1)/2 + ((eps_r - 1)/2) * (1 + 12h/Wf)^(-1/2),

and the effective dimensions of the feed line follow from these relations.

The configurations of the various structures studied in the design process of the proposed dual-polarized antenna, together with their S-parameters, are displayed in the accompanying figure. The reflection coefficient (S11) for various antenna design parameters, such as Lf, Wf, Wx and x, is demonstrated in the corresponding parametric-study figures.

The photograph of the fabricated prototype of the proposed antenna design (single element) and the measured reflection coefficient (Snn) and transmission coefficient (Snm) are given in the accompanying figures. The figures show that the suggested antenna design for smart mobile phones exhibits good return loss, wide bandwidth, high isolation, low mutual coupling, and polarization diversity. Each of the four dual-polarized radiating elements offers comparable performance.

The 8 by 8 smartphone antenna design on a 75 mm by 150 mm PCB is illustrated in the accompanying figure, as is the photograph of the fabricated prototype of the 8 by 8 smartphone antenna. The measured and simulated 2D polar plots (E- and H-plane) of the radiation pattern are also compared and illustrated.

ECC, DG, TARC, and channel capacity (CC) are some crucial factors that need to be taken into account in MIMO antennas to verify that the MIMO system functions properly. The ECC between the ith and jth elements of an m by n MIMO antenna can be determined from the antenna's S-matrix. The ECC describes how independent the radiation patterns of two elements of a MIMO system are. If one element is vertically polarized and a nearby second element is horizontally polarized, then these elements would have zero correlation between them. Correspondingly, if one antenna only radiated energy towards the ground and the other antenna element had a radiation pattern towards the sky, totally opposite, the ECC between these antennas would also be zero. The polarization of the antenna, the shape of the radiation pattern, and the relative phase of the fields between two elements of a MIMO antenna are taken into consideration when calculating the ECC of a MIMO system. For a two-element pair, the envelope correlation coefficient can be calculated from the S-matrix via the standard expression

ECC = |S11* S12 + S21* S22|^2 / [ (1 - |S11|^2 - |S21|^2)(1 - |S12|^2 - |S22|^2) ],

where * denotes complex conjugation. An important parameter for assessing the performance of a MIMO antenna is TARC; for a two-port case it can be calculated from the scattering parameters as

TARC = sqrt( |S11 + S12*e^(j*theta)|^2 + |S21 + S22*e^(j*theta)|^2 ) / sqrt(2),

where theta is the relative phase between the two excitations.

A detailed comparison of the proposed work with work already present in the literature is given in the comparison table. The effect of the user's body tissues on the performance of the proposed design is studied considering three common scenarios: data mode, which includes single-hand mode (SHM) and dual-hand mode (DHM), and talking mode (TM). The placement of the mobile phone MIMO antenna in the different scenarios is illustrated in the accompanying figure.
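To illustrate the ECC and TARC expressions above, the following minimal Python sketch evaluates both for one element pair; the S-parameter values are hypothetical placeholders rather than measured data:

import numpy as np

# Hypothetical complex S-parameters for a two-port element pair.
S11, S12 = 0.10 + 0.05j, 0.02 - 0.01j
S21, S22 = 0.02 - 0.01j, 0.12 + 0.03j

# Envelope correlation coefficient from the S-matrix.
num = abs(np.conj(S11) * S12 + np.conj(S21) * S22) ** 2
den = (1 - abs(S11) ** 2 - abs(S21) ** 2) * (1 - abs(S12) ** 2 - abs(S22) ** 2)
ecc = num / den

# TARC swept over the relative excitation phase theta.
theta = np.linspace(0.0, 2.0 * np.pi, 361)
tarc = np.sqrt(np.abs(S11 + S12 * np.exp(1j * theta)) ** 2
               + np.abs(S21 + S22 * np.exp(1j * theta)) ** 2) / np.sqrt(2.0)

print(f"ECC = {ecc:.6f}, worst-case TARC = {tarc.max():.3f}")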
For 5G MIMO communications, a mobile phone antenna with dual-polarization capabilities is suggested. The antenna layout includes two-port microstrip feed lines with ground-plane diamond slots placed at each of the PCB's four corners. The antenna elements have a broad bandwidth with a 3.6 GHz center frequency. Antenna performance parameters, i.e., S-parameters, efficiency, maximum realized gain, radiation patterns, DG, ECC, TARC, and channel capacity, are simulated and adequate results are obtained. The antenna has more than 700 MHz bandwidth and a radiation efficiency of more than 85%, with more than 6 dB gain for the MIMO configuration. The antenna exhibits more than 30 dB isolation due to the orthogonal placement of adjacent elements. The ECC, one of the main MIMO antenna parameters, is less than 0.001. The antenna's performance in talk-mode and data-mode scenarios is examined. Also, the proposed smartphone antenna design is fabricated and tested. The experimental results are in good agreement with the simulated ones, with very little variation due to experimental errors. The results show that the suggested smartphone antenna satisfies the criteria for use in upcoming smartphones.

28 Apr 2023
PONE-D-23-09968
Dual Polarized 8-port Sub 6 GHz 5G MIMO Antenna for Smart Phone and Portable Wireless Applications
PLOS ONE

Dear Dr. Muhammad,
\u00a0
Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 12 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Yuan-Fong Chou Chau
Academic Editor
PLOS ONE

Journal Requirements:
When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript:
"The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Groups (NU/ RG/SERC/11/9)."
We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:
"The author(s) received no specific funding for this work."
Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4.
In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study's minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. We will update your Data Availability statement to reflect the information you provide in your cover letter.

5. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly
**********
2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: N/A
Reviewer #3: N/A
**********
3. Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
**********
4.
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No
**********
5. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.

Reviewer #1: This research work demonstrated and practically implemented a mobile phone antenna with dual-polarization capabilities which is a potential candidate for 5G MIMO communications. The antenna layout includes two-port microstrip feed lines with ground plane diamond slots placed at each of the PCB's four corners. From this reviewer's point of view, promising results have been achieved and well discussed in the well-organized manuscript. The results have been experimentally validated and highlighted by providing a fair comparison with the state of the art. Although the concept and idea of this work were found interesting and attractive for the scientific community, the authors are requested to carefully address the following comments to improve the quality of the manuscript prior to final recommendation.
1) Please add the applied design method of the proposed MIMO antenna to the title.
2) Please explain some information about the advantage of the proposed decoupling method in the abstract section.
3) Average radiation gain and efficiency can be added to the abstract section.
4) The introduction section can be improved by adding more explanations along with proper references. For example, more discussion on 5G MIMO antennas is requested. Also, to realize MIMO antennas, isolation between the radiating elements is very important and needs to be discussed. There are various decoupling methods which can be briefly mentioned. Below are helpful suggestions.
(i) 5G MIMO antennas
"H-shaped Eight-Element Dual-band MIMO Antenna for Sub-6 GHz 5G Smartphone Applications", IEEE Access, vol. 10, pp. 85619-85629, 2022.
"An Innovative Antenna Array with High Inter Element Isolation for Sub-6 GHz 5G MIMO Communication Systems", Scientific Reports, 12, 7907, 2022.
"mmWave Four-Element MIMO Antenna for Future 5G Systems", Applied Sciences, 12(9), 4280, 2022.
"Uni-Planar MIMO Antenna for Sub-6 GHz 5G Mobile Phone Applications", Applied Sciences, 12(8), 3746, 2022.
"Multiple Elements MIMO Antenna System with Broadband Operation for 5th Generation Smart Phones", IEEE Access, vol. 10, pp. 38446-38457, 2022.
"Novel MIMO Antenna System for Ultra Wideband Applications", Applied Sciences, 12(7), 3684, 2022.
"A high gain multiband offset MIMO antenna based on a planar log-periodic array for Ku/K-band applications", Scientific Reports, 12, 4044, 2022.
"A Compact CPW-Fed Ultra-Wideband Multi-Input-Multi-Output (MIMO) Antenna for Wireless Communication Networks", IEEE Access, vol. 10, pp.
25278-25289, 2022.
"Printed Closely Spaced Antennas Loaded by Linear Stubs in a MIMO Style for Portable Wireless Electronic Devices", Electronics, 10(22), 2848, 2021.
"MIMO Antenna System for Modern 5G Handheld Devices with Healthcare and High Rate Delivery", Sensors, 21(21), 7415, 2021.
(ii) Decoupling methods to realize MIMO antennas
"A Comprehensive Survey on "Various Decoupling Mechanisms with Focus on Metamaterial and Metasurface Principles Applicable to SAR and MIMO Antenna Systems"", IEEE Access, vol. 8, pp. 192965-193004, 2020.
"Study on Isolation and Radiation Behaviours of a 34x34 Array-Antennas Based on SIW and Metasurface Properties for Applications in Terahertz Band Over 125-300 GHz", Optik, International Journal for Light and Electron Optics, Volume 206, March 2020, 163222.
"Isolation Enhancement of Densely Packed Array Antennas with Periodic MTM-Photonic Bandgap for SAR and MIMO Systems", IET Microwaves, Antennas & Propagation, Volume 14, Issue 3, February 2020, pp. 183-188.
"Surface Wave Reduction in Antenna Arrays Using Metasurface Inclusion for MIMO and SAR Systems", Radio Science, 54, 1067-1075, 2019.
"Mutual-Coupling Isolation Using Embedded Metamaterial EM Bandgap Decoupling Slab for Densely Packed Array Antennas", IEEE Access, vol. 7, pp. 5182-51840, April 29, 2019.
"Mutual Coupling Suppression Between Two Closely Placed Microstrip Patches Using EM-Bandgap Metamaterial Fractal Loading", IEEE Access, vol. 7, pp. 23606-23614, March 5, 2019.
"Interaction Between Closely Packed Array Antenna Elements Using Metasurface for Applications Such as MIMO Systems and Synthetic Aperture Radars", Radio Science, Volume 53, Issue 11, November 2018, Pages 1368-1381.
"Antenna Mutual Coupling Suppression Over Wideband Using Embedded Periphery Slot for Antenna Arrays", Electronics, 2018, 7(9), 198.
"Study on Isolation Improvement Between Closely Packed Patch Antenna Arrays Based on Fractal Metamaterial Electromagnetic Bandgap Structures", IET Microwaves, Antennas & Propagation, Volume 12, Issue 14, 28 November 2018, pp. 2241-2247.
"Meta-surface Wall Suppression of Mutual Coupling between Microstrip Patch Antenna Arrays for THz-band Applications", Progress in Electromagnetics Research Letters, Vol. 75, pp. 105-111, 2018.
5) The design process of the proposed single antenna should be elaborated in depth. Please explain why the authors have realized diagonal rectangular slots on the back. How were its dimensions optimized?
6) The quality of the plots is poor; they need to be improved.
7) More discussion on the surface current distributions should be added.
8) The feeding mechanism of the MIMO antenna should be explained in detail.
9) In comparison table 2, please add the terms "applied design method, radiation gain, and design complexity" as well to make it more comprehensive.
10) Please extend the conclusion by adding more numerical results and achievements.
11) The reference part needs to be improved as per the above-mentioned suggestions.

Reviewer #2:
1. An 8-port MIMO antenna for smartphone application is proposed.
2. Authors need to carry out a systematic literature review to establish the technical contribution and need for the proposed work. Please refer to the paper below for the same: Design and Analysis of Wideband Flexible Self-Isolating MIMO Antennas for Sub-6 GHz 5G and WLAN Smartphone Terminals
3.
Also, authors should refer to some of the latest sub-6 GHz MIMO antennas and do a thorough comparison to establish the novel contribution:
a. Compact wideband four element optically transparent MIMO antenna for mm-wave 5G applications
b. Multiband hybrid MIMO DRA for Sub-6 GHz 5G and WiFi-6 applications
c. Dual-band and dual-polarization CPW Fed MIMO antenna for fifth-generation mobile communications technology at 28 and 38 GHz
d. Wideband flexible/transparent connected-ground MIMO antennas for sub-6 GHz 5G and WLAN applications
4. Improve the quality of all the figures.
5. Show the coordinate axis next to the antenna geometry.
6. In Fig 6(a), gain should be in dBi and not dB.
7. For such a configuration, the number of input ports increases significantly.
8. Present selected data in some of the graphs for brevity of data.
9. Fig 15 is not at all clear.
10. Authors should carry out SAR analysis.

Reviewer #3: The article "Dual Polarized 8-port Sub 6 GHz 5G MIMO Antenna for Smart Phone and Portable Wireless Applications" needs major revision before proceeding to the next level:
1) The design orientation and associated mathematical formulation are missing in the paper
2) Figure captions are not given with numbers for review
3) For eight ports, the transmission coefficient results shown are of low quality and the data are zig-zag.
4) A human phantom model is used in the work for analysis. I have not found any SAR readings or related material.
5) The CCL, diversity gain, and TARC parameters for MIMO were not discussed or presented
6) 2D plots with several combinations are presented without any analysis

**********
6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No
Reviewer #3: Yes: Dr. B T P Madhav
**********

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool (https://pacev2.apexcovantage.com/). PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

20 Jun 2023

Original Manuscript ID: PONE-D-23-09968
Original Article Title: Dual Polarized 8-port Sub 6 GHz 5G MIMO Antenna for Smart Phone and Portable Wireless Applications
To: PLOS ONE Editor
Re:
We are very thankful to the PLOS ONE journal team and reviewers for such a comprehensive and profound review. We have revised our manuscript in light of their valuable queries and suggestions. We hope our revision has improved the paper quality to a level of the reviewers' satisfaction.
The answers to their specific suggestions/queries/comments are given below in detail.

Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers' comments. We are uploading (a) our point-by-point response to the comments (below) (Response to Reviewers), (b) an updated manuscript with yellow highlighting indicating changes (Revised Manuscript with Track Changes), and (c) a clean updated manuscript without highlights (Manuscript) (PDF main document).
Best regards,
Dr. Fazal Muhammad
Corresponding Author

Response to Reviewer #1: We would like to thank the reviewer for the careful and thorough reading of this manuscript and for the thoughtful comments and constructive suggestions, which helped to improve the quality of this manuscript. Our responses follow.

General Comments: This research work demonstrated and practically implemented a mobile phone antenna with dual-polarization capabilities which is a potential candidate for 5G MIMO communications. The antenna layout includes two-port microstrip feed lines with ground plane diamond slots placed at each of the PCB's four corners. From this reviewer's point of view, promising results have been achieved and well discussed in the well-organized manuscript. The results have been experimentally validated and highlighted by providing a fair comparison with the state of the art. Although the concept and idea of this work were found interesting and attractive for the scientific community, the authors are requested to carefully address the following comments to improve the quality of the manuscript prior to final recommendation.

Reviewer #1 Concerns:
1. Please add the applied design method of the proposed MIMO antenna to the title.
2. Please explain some information about the advantage of the proposed decoupling method in the abstract section.
3. Average radiation gain and efficiency can be added to the abstract section.
4. Introduction section can be improved by adding more explanations along with proper references.
5. Design process of the proposed single antenna should be elaborated in depth. Please explain why authors have realized diagonal rectangular slots on the back? How were its dimensions optimized?
6. Quality of the plots is poor; they need to be improved.
7. More discussions on the surface current distributions should be added.
8. The feeding mechanism of the MIMO antenna should be explained in detail.
9. In comparison table 2 please add the terms "applied design method, radiation gain, and design complexity" as well to make it more comprehensive.
10. Please extend the conclusion by adding more numerical results and achievements.
11. Reference part needs to be improved as per the above-mentioned suggestions.

Additional Questions:
1. Is the manuscript technically sound, and do the data support the conclusions? Yes
2. Has the statistical analysis been performed appropriately and rigorously? Yes
3. Have the authors made all data underlying the findings in their manuscript fully available? Yes
4. Is the manuscript presented in an intelligible fashion and written in standard English? Yes

Reviewer#1, Concern#1: Please add the applied design method of the proposed MIMO antenna to the title.
Author response: Thanks for the valuable suggestion; as per the respected reviewer, we have changed the title of our manuscript and added the design method to the title.
Our new title is as follows:
"Dual-polarized 8-port sub 6 GHz 5G MIMO diamond-ring slot antenna for smart phone and portable wireless applications"
________________________________________
Reviewer#1, Concern#2: Please explain some information about the advantage of the proposed decoupling method in the abstract section.
Author response: Thank you very much for the valuable suggestion; as per the honorable reviewer's suggestion, we have added information about the advantage of the proposed decoupling method in the abstract section, as highlighted in the revised version of the manuscript.
________________________________________
Reviewer#1, Concern#3: Average radiation gain and efficiency can be added to the abstract section.
Author response: Thanks for the suggestion; the average gain and efficiency are added to the abstract section and highlighted in the marked version of the manuscript.
________________________________________
Reviewer#1, Concern#4: Introduction section can be improved by adding more explanations along with proper references.
Author response: Thank you very much; as per the honorable suggestion, the introduction section is improved by adding more explanation with proper references, and the changes are highlighted in the marked version.
________________________________________
Reviewer#1, Concern#5: Design process of the proposed single antenna should be elaborated in depth. Please explain why authors have realized diagonal rectangular slots on the back? How were its dimensions optimized?
Author response: Thanks for your comment; the design process is elaborated and some mathematical equations are included, as shown in the revised version. The diagonal slot in the ground plane is used to reduce the mutual coupling between antenna elements and increase bandwidth. The dimensions are optimized through a parametric study.
________________________________________
Reviewer#1, Concern#6: Quality of the plots is poor; they need to be improved.
Author response: Thank you very much for the comments; the quality of the plots is improved and highlighted in the revised manuscript.
________________________________________
Reviewer#1, Concern#7: More discussions on the surface current distributions should be added.
Author response: Thank you very much. As per the honorable reviewer's directions, more discussion on surface currents is added, as shown in the highlighted version.
________________________________________
Reviewer#1, Concern#8: The feeding mechanism of the MIMO antenna should be explained in detail.
Author response: The feeding mechanism is explained in detail, as the honorable reviewer suggested. Thank you.
________________________________________
Reviewer#1, Concern#9: In comparison table 2 please add the terms "applied design method, radiation gain, and design complexity" as well to make it more comprehensive.
Author response: Table 2 is updated as per the respected reviewer's comment; design method, gain, coupling techniques, and material used columns are added to the table.
________________________________________
Reviewer#1, Concern#10: Please extend the conclusion by adding more numerical results and achievements.
Author response: Thank you very much; the conclusion is extended by adding more numerical results as per the honorable reviewer.
________________________________________
Reviewer#1, Concern#11: Reference part needs to be improved as per the above-mentioned suggestions.
Author response: The reference part is improved according to the honorable reviewer's suggestion,
as highlighted in the references section in the revised version.
________________________________________
________________________________________

Response to Reviewer #2: We would like to thank the reviewer for the careful and thorough reading of this manuscript and for the thoughtful comments and constructive suggestions, which helped to improve the quality of this manuscript. Our responses follow.

General Comments: An 8-port MIMO antenna for smartphone application is proposed.

Reviewer #2 Concerns:
1. Authors need to carry out a systematic literature review to establish the technical contribution and need for the proposed work. Please refer to the paper below for the same: Design and Analysis of Wideband Flexible Self-Isolating MIMO Antennas for Sub-6 GHz 5G and WLAN Smartphone Terminals
2. Also, authors should refer to some of the latest sub-6 GHz MIMO antennas and do a thorough comparison to establish the novel contribution.
3. Improve the quality of all the figures.
4. Show the coordinate axis next to the antenna geometry.
5. In Fig 6(a), gain should be in dBi and not dB.
6. For such a configuration, the number of input ports increases significantly.
7. Present selected data in some of the graphs for brevity of data.
8. Fig 15 is not at all clear.
9. Authors should carry out SAR analysis.

Additional Questions:
1. Is the manuscript technically sound, and do the data support the conclusions? Yes
2. Has the statistical analysis been performed appropriately and rigorously? N/A
3. Have the authors made all data underlying the findings in their manuscript fully available? Yes
4. Is the manuscript presented in an intelligible fashion and written in standard English? Yes

Reviewer#2, Concern#1: Authors need to carry out a systematic literature review to establish the technical contribution and need for the proposed work.
Author response: Thanks for the valuable suggestion; as per the respected reviewer, the introduction section is improved and a systematic literature review is made with proper references. The changes are highlighted in the revised marked version.
________________________________________
Reviewer#2, Concern#2: Also, authors should refer to some of the latest sub-6 GHz MIMO antennas and do a thorough comparison to establish the novel contribution.
Author response: Thanks for the valuable suggestion; as per the respected reviewer, some of the latest work is referred to and compared with the proposed design. The changes are highlighted in the revised marked version.
________________________________________
Reviewer#2, Concern#3: Improve the quality of all the figures.
Author response: Thank you very much; as per the honorable reviewer's comments, the quality of the figures is improved.
________________________________________
Reviewer#2, Concern#4: Show the coordinate axis next to the antenna geometry.
Author response: Thank you very much; as per the honorable reviewer, a coordinate axis is added next to the antenna geometry. See Figures 1 and 9.
________________________________________
Reviewer#2, Concern#5: In Fig 6(a), gain should be in dBi and not dB.
Author response: Thank you very much; as per the respected reviewer's valuable suggestion, the gain is plotted in dBi instead of dB. See Figure 6(a).
________________________________________
Reviewer#2, Concern#6: For such a configuration, the number of input ports increases significantly.
Author response: Thank you very much for the query; yes, the number of input ports is increased, but for 5G the data rate is directly proportional to the number of antenna elements and the number of ports.
________________________________________
Reviewer#2, Concern#7: Present selected data in some of the graphs for brevity of data.
Author response: Thank you very much; as per the honorable reviewer's valuable suggestion, some of the symmetrical data has been removed for clarity and brevity.
________________________________________
Reviewer#2, Concern#8: Fig 15 is not at all clear.
Author response: Thank you very much for your valuable comment; as per the honorable reviewer's suggestion, we have replaced Figure 15 with a clear one.
________________________________________
Reviewer#2, Concern#9: Authors should carry out SAR analysis.
Author response: Thank you very much for the suggestion; as per the honorable reviewer's suggestion, a detailed SAR analysis is done and the results are presented in the revised version.
________________________________________
________________________________________

Response to Reviewer #3: We would like to thank the reviewer for the careful and thorough reading of this manuscript and for the thoughtful comments and constructive suggestions, which helped to improve the quality of this manuscript. Our responses follow.

General Comments: The article "Dual Polarized 8-port Sub 6 GHz 5G MIMO Antenna for Smart Phone and Portable Wireless Applications" needs major revision before proceeding to the next level.

Reviewer #3 Concerns:
1. The design orientation and associated mathematical formulation are missing in the paper
2. Figure captions are not given with numbers for review
3. For eight ports, the transmission coefficient results shown are of low quality and the data are zig-zag.
4. A human phantom model is used in the work for analysis. I have not found any SAR readings or related material.
5. The CCL, diversity gain, and TARC parameters for MIMO were not discussed or presented
6. 2D plots with several combinations are presented without any analysis

Additional Questions:
1. Is the manuscript technically sound, and do the data support the conclusions? Partly
2. Has the statistical analysis been performed appropriately and rigorously? N/A
3. Have the authors made all data underlying the findings in their manuscript fully available? Yes
4. Is the manuscript presented in an intelligible fashion and written in standard English? No

Reviewer#3, Concern#1: The design orientation and associated mathematical formulation are missing in the paper.
Author response: Thank you very much for bringing our attention to this; as per the honorable reviewer, the mathematical equations are added to the paper.
________________________________________
Reviewer#3, Concern#2: Figure captions are not given with numbers for review.
Author response: Sorry for missing this; this time figure captions are added with numbers. Thank you for your comment.
________________________________________
Reviewer#3, Concern#3: For eight ports, the transmission coefficient results shown are of low quality and the data are zig-zag.
Author response: Thank you very much for the valuable suggestion; as per the honorable reviewer, the quality of the plot is improved.
________________________________________
Reviewer#3, Concern#4: A human phantom model is used in the work for analysis.
I have not found any SAR readings or related material.
Author response: Thank you very much for suggesting SAR analysis; the SAR analysis is done and the results are presented in the revised version.
________________________________________
Reviewer#3, Concern#5: The CCL, diversity gain, and TARC parameters for MIMO were not discussed or presented.
Author response: Thank you very much for suggesting the addition of some important MIMO parameters like TARC, DG, and channel capacity. The results are calculated and presented in the revised version.
________________________________________
Reviewer#3, Concern#6: 2D plots with several combinations are presented without any analysis.
Author response: Thank you very much for the valuable suggestion; analysis of the 2D plots is added to the revised version.
________________________________________
________________________________________
Kind Regards,
Dr. Fazal Muhammad
Corresponding Author

4 Jul 2023

Dual Polarized 8-port Sub 6 GHz 5G MIMO Antenna for Smart Phone and Portable Wireless Applications
PONE-D-23-09968R1

Dear Dr. Muhammad,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Yuan-Fong Chou Chau
Academic Editor
PLOS ONE

Additional Editor Comments:

Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
**********
2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes
**********
3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: N/A
**********
4. Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.
Reviewer #1: Yes
Reviewer #2: Yes
**********
5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes
**********
6. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
Reviewer #1: Appropriate modifications have been applied as requested to improve the quality of the manuscript to an acceptable level.
Reviewer #2: 1. The comments given by the reviewers are addressed and implemented properly. 2. The manuscript is good to be published in its present form.
**********
7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No
**********

18 Jul 2023

PONE-D-23-09968R1
Dual-polarized 8-port sub 6 GHz 5G MIMO diamond-ring slot antenna for smart phone and portable wireless applications

Dear Dr. Muhammad:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Yuan-Fong Chou Chau
Academic Editor
PLOS ONE"} +{"text": "Non-coding RNAs (ncRNAs) can control the flux of genetic information, affect RNA stability, and play crucial roles in mediating epigenetic modifications.
A number of studies have highlighted the potential roles of both virus-encoded and host-encoded ncRNAs in viral infections, transmission and therapeutics. However, the role of an emerging type of non-coding transcript, circular RNA (circRNA), in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection has not been fully elucidated so far. Moreover, the potential pathogenic role of the circRNA-miRNA-mRNA regulatory axis has not yet been fully explored. The current study aimed to holistically map the regulatory networks driven by SARS-CoV-2 related circRNAs, miRNAs and mRNAs to uncover plausible interactions and interplay amongst them, in order to explore possible therapeutic options in SARS-CoV-2 infection. Patient datasets were analyzed systematically in a unified approach to explore circRNA, miRNA, and mRNA expression profiles. A circRNA-miRNA-mRNA network was constructed based on cytokine storm related circRNAs, forming a total of 165 circRNA-miRNA-mRNA pairs. This study implies a potential regulatory role for the obtained circRNA-miRNA-mRNA network and proposes that two differentially expressed circRNAs, hsa_circ_0080942 and hsa_circ_0080135, might serve as potential theranostic agents for SARS-CoV-2 infection. Collectively, the results shed light on the functional role of circRNAs as ceRNAs that sponge miRNAs and regulate mRNA expression during SARS-CoV-2 infection.

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), an enveloped RNA virus with a genome size of 29,903 bp, is a highly infectious and pathogenic coronavirus. Recent advances in high-throughput sequencing technologies and computational methods have discovered a substantial number of ncRNAs, ultimately providing new insights into their role in a range of human diseases [7]. On the other hand, circRNAs, a novel class of ncRNAs, have been reported to play a crucial role in regulating viral infections, and their dysregulation has been implicated in the pathogenesis of various diseases [18]. The circRNA-miRNA-mRNA regulatory axis has been shown to be of high importance in association with several human diseases, including cancers, diabetes, Alzheimer's disease, and cardiovascular diseases [25].

There is an abundance of COVID-19 related transcriptomics studies and data; however, their use is limited by the confounding factors pertaining to each study. In the current study, we have analyzed different datasets in a unified approach, which might help in understanding the molecular basis of COVID-19. Moreover, a reverse engineering approach was utilized to derive regulatory interactions between circRNAs, miRNAs and mRNAs from gene expression data of SARS-CoV-2 patients. In order to gain a better understanding of the molecular and immuno-pathological basis, possible regulatory mechanisms of the circRNA-miRNA-mRNA axis during SARS-CoV-2 infection were investigated. A circRNA-miRNA-mRNA regulatory network consisting of differentially expressed circRNAs and their downstream miRNAs and target mRNAs has been constructed for SARS-CoV-2 related pathogenesis. The circRNAs that may play critical roles in regulating the cytokine storm during SARS-CoV-2 infection were identified. The results from this study revealed some candidate circRNAs that might function as potential theranostic agents in SARS-CoV-2 infection. Moreover, targeting the "cytokine storm" using circRNAs might be a feasible therapeutic approach to combat COVID-19.
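The network-construction step can be illustrated schematically in Python; this is not the study's actual code, and the interaction pairs, gene names, and the networkx dependency are assumed purely for illustration:

import networkx as nx

# Hypothetical sponge interactions: circRNA -> miRNA and miRNA -> mRNA.
circ_mirna = [("hsa_circ_0080942", "hsa-miR-A"), ("hsa_circ_0080135", "hsa-miR-B")]
mirna_mrna = [("hsa-miR-A", "cytokine_gene_1"), ("hsa-miR-B", "cytokine_gene_2")]

g = nx.DiGraph()
for circ, mir in circ_mirna:
    g.add_edge(circ, mir, interaction="sponges")
for mir, mrna in mirna_mrna:
    g.add_edge(mir, mrna, interaction="represses")

# Enumerate circRNA-miRNA-mRNA axes as length-two directed paths.
axes = [(c, m, t) for c, m in circ_mirna for t in g.successors(m)]
print(len(axes), "candidate regulatory axes, e.g.:", axes[0])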
The Gene Expression Omnibus (GEO) (https://www.ncbi.nlm.nih.gov/geo/), a database supported by the National Center for Biotechnology Information (NCBI) at the National Library of Medicine (NLM), was used to access the microarray and RNA-sequencing datasets that contain circRNA, miRNA and mRNA expression profiles of SARS-CoV-2 infected patients at various stages; the PRJCA002617 dataset was obtained from the National Genomics Data Center (https://ngdc.cncb.ac.cn/). As the present study does not involve human subjects and due to the free availability of data in the GEO database, neither ethical approval nor informed consent was required.The GSE166552 dataset included 6 samples (3 SARS-CoV-2 positive and 3 controls). The PRJCA002617 dataset included 24 samples (12 SARS-CoV-2 positive and 12 controls). The McDonald et al. dataset included 45 samples . The GSE19137 dataset included 21 samples (3 negative and 18 positive for SARS-CoV). The Chow et al. dataset included 249 samples (147 SARS-CoV-2 infected samples and 102 controls). The Dhar et al. dataset included 2157 samples (including 915 severe COVID-19 patients). The Liu et al. dataset contained 40 samples (including 13 severe COVID-19 patients). The Farr et al. dataset included 20 samples . The Li et al. dataset included 14 samples . The Huang et al. dataset comprised 41 samples (including 13 severe COVID-19 patients). The Chi et al. dataset included 70 SARS-CoV-2 infected patients, 4 convalescent cases and 4 healthy controls. The Lin et al. dataset included 334 samples (including 23 severe COVID-19 patient samples). The Chen et al. (b) dataset included 21 samples (including 11 severe COVID-19 patient samples). The Chen et al. (c) dataset contained 29 samples (including 14 severe COVID-19 patient samples). The Blanco-Melo et al. dataset included 48 samples (24 SARS-CoV-2 positive samples and 24 negative samples). The Del Valle et al. dataset included a total of 1484 samples (1097 positive for SARS-CoV-2 infection and 387 controls). The Qin et al. dataset comprised 452 samples (including 286 severe COVID-19 patient samples). The Yang et al. dataset included 50 samples (including 36 severe COVID-19 patient samples).CircBank (http://www.circbank.cn/) is a comprehensive, publicly available, functionally annotated human circRNA database containing information on about 140,000 circRNAs from many different sources . 2. We noted in your submission details that a portion of your manuscript may have been presented or published elsewhere. Please clarify whether this [conference proceeding or publication] was peer-reviewed and formally published. If this work was previously peer-reviewed and published, in the cover letter please provide the reason that this work does not constitute dual publication and should be included in the current manuscript.While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool,\u00a0https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at\u00a0figures@plos.org. Please note that Supporting Information files do not need this step.Attachment Submitted filename: Manuscript Number PONE D 22 34657.docx. Click here for additional data file.
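As a practical aside to the dataset listing above (an editorial sketch, not part of the study's own pipeline), GEO series such as those cited can be retrieved programmatically with the GEOparse Python package, and the per-sample metadata can then be used to separate SARS-CoV-2-positive samples from controls.

import GEOparse

# Download (and cache) one of the GEO series listed above as a SOFT file.
gse = GEOparse.get_GEO(geo="GSE166552", destdir="./geo_cache")

# Inspect per-sample metadata to split infected samples from controls;
# GEOparse stores metadata values as lists of strings.
for gsm_name, gsm in gse.gsms.items():
    title = gsm.metadata.get("title", [""])[0]
    characteristics = gsm.metadata.get("characteristics_ch1", [])
    print(gsm_name, title, characteristics)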
23 Feb 2023Response to reviewersDear Editor,We wish to express our appreciation for your in-depth comments, suggestions and corrections, and we would like to convey our sincere thanks for allowing us to improve our manuscript entitled \u201cMapping CircRNA\u2013miRNA\u2013mRNA Regulatory Axis Identifies hsa_circ_0080942 and hsa_circ_0080135 as a potential theranostic agents for SARS-CoV-2 infection\u201d. Thank you for your very careful review of our paper. A major revision of the paper has been carried out to take all of the weaknesses and limitations identified by the respective reviewers into account. In the process, we truly hope that the revised manuscript is clear enough to follow.Below is an abridged summary of the reviewers\u2019 comments with a detailed response and description of the changes made to the article. Should you find the paper requires further clarification or revision, we most certainly stand ready to do so.Looking forward to your positive response.Sincerely,Dr. Faryal Mehwish AwanJournal Requirements:When submitting your revision, we need you to address these additional requirements.Comment # 1Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. ResponseWe have formatted our manuscript according to PLOS ONE\u2019s style. The format has been updated in the revised version of the manuscript as per journal requirements. Comment # 2We noted in your submission details that a portion of your manuscript may have been presented or published elsewhere. Please clarify whether this [conference proceeding or publication] was peer-reviewed and formally published. If this work was previously peer-reviewed and published, in the cover letter please provide the reason that this work does not constitute dual publication and should be included in the current manuscript. Response[All the datasets analyzed during the current study are accessible from the literature as well as from the GEO database repository (https://www.ncbi.nlm.nih.gov/gds/) with accession details , , , , , (Demirci and Demirci 2021), , , , , , , , , , , and datasets).] The GEO repository archives and freely distributes microarray, next-generation sequencing and other forms of high-throughput functional genomics data. Such datasets hold great value for knowledge discovery, particularly when integrated, and can potentially bring novel insights into essential questions. The present study has aimed at prioritizing the potential circRNA candidates for a prospective theranostic evaluation via exploring the existing publicly available datasets in the COVID-19 setting. Public databases have a lot of high throughput data, which greatly helps in revealing the possible disease pathogenesis and identifying potential targets for drug design. Experimental validation of all the discovered associations, let alone all the possible interactions between them, is time-consuming and expensive. In conventional approaches, large experimental screenings are currently used to identify potential leading compounds, but they require significant time and resources. However, one of the lessons learned during the current pandemic is that innovative approaches are required to speed up drug development while increasing its success rate. Since gene expression data are high-dimensional data, an important research aim in the analysis of transcription profiles is the discovery of a small subset of biomarkers containing the most discriminant information. 
Therefore, the current study computationally prioritized the data available in the databases for potential SARS-CoV-2 inhibitors by using an integrated approach. A number of recently published research studies have utilized publicly available datasets for the prioritization of the most promising candidates. Some of these studies are included in the reference list below. We have added this information in the cover letter to clarify this comment. Comment: 3Please upload a new copy of Figure 7 as the detail is not clear. Please follow the link for more information: https://blogs.plos.org/plos/2019/06/looking-good-tips-for-creating-your-plos-figures-graphics/ResponseWe agree with your assessment. We have now followed the figure graphics requirements for all the figures and have also uploaded a new copy of Figure 7 as \u201cFig7.tif\u201d. Fig 7: Pathway analysis of COVID-19 pathogenesis (KEGG pathway ID: map05171). Highlighted genes are targets of miRNAs and indirect targets of the two prioritized circRNAs. Comment: 4Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article\u2019s retracted status in the References list and also include a citation and full reference for the retraction notice.ResponseWe have rechecked the whole manuscript for any errors in the references. Corrections have been made and highlighted. Reviewer #1Comments and Suggestions for AuthorsThis is an interesting small study and the authors have collected a unique dataset using cutting edge methodology. The paper is well written and structured.ResponseWe appreciate the positive feedback from the reviewer and would like to thank the respected reviewer for the encouraging assessment and the comments that helped us to improve the manuscript. Comment: 1The following minor issues should be addressed:In the introduction section, first paragraph, the authors may update the total number of people infected and deaths globally due to SARS-CoV-2, with updated references.ResponseWe agree with your assessment. As per your suggestion, we have updated the total numbers as per the World Health Organization report and revised the statement as follows. Currently, >750 million people have been infected globally while >6.8 million people have lost their lives due to COVID-19 .We have updated this information in the revised version of the manuscript.Reviewer #2Comments and Suggestions for AuthorsThe manuscript entitled Mapping CircRNA\u2013miRNA\u2013mRNA Regulatory Axis Identifies hsa_circ_0080942 and hsa_circ_0080135 as a potential theranostic agents for SARS-CoV-2 infection can be accepted for publication after a few improvements.ResponseWe appreciate the positive feedback from the reviewer and would like to thank the respected reviewer for the encouraging assessment and the comments that helped us to improve the manuscript. Comment: 1The authors have selected the common circRNAs from the circRNA, mRNA, and miRNA datasets. How did the authors find circRNAs from the mRNA and miRNA datasets? 
The authors are suggested to clearly describe the results in detail rather than writing superficially.ResponseCircBank (http://www.circbank.cn/) is a comprehensive, publicly available, functionally annotated human circRNA database containing information on about 140,000 circRNAs from many different sources . Users can access information regarding the conservation status, miRNA targets as well as protein coding potential of query circRNAs . CircInteractome (https://circinteractome.nia.nih.gov/) is a readily accessible web tool for mapping miRNAs and protein-binding sites on junctions as well as junction-flanking sequences of human circRNAs . The RNA Interactome Database, RNAInter v4.0 (http://www.rnainter.org/), is a comprehensive RNA-associated interactome platform containing information on more than 41 million interactions of cellular RNAs in 154 species with evidence from both computational and experimental sources . Selection criteria, thresholds and prediction scores for each database were selected on the basis of their previously reported relationship with a low false discovery rate and high accuracy in experimental validation studies via PCR and luciferase assays. The target circRNAs of differentially expressed miRNAs were predicted using different comprehensive databases including the CircBank, CircInteractome and RNAInter v4.0 web tools (Table 2). miRDB (http://mirdb.org/) is an integrative, freely accessible, open platform for the prediction of miRNA targets. miRNA-target interactions with scores \u226580.0 were considered relevant, statistically significant and with higher confidence in the interactions, whereas miRNA-target interactions with scores <80.0 were considered not relevant. By utilizing high-throughput experimental data, miRDB predicts miRNA targets in five species along with integrative analysis of gene ontology (GO) data . miRWalk 2.0 provides information on more than 949 million computationally predicted as well as experimentally validated miRNA-mRNA interactions. In order to ensure the reliability and accuracy of prediction results, miRWalk 2.0 incorporates 12 algorithms for prediction including miRWalk, mirbridge, Targetscan, Microt4, PITA, Pictar2, RNAhybrid, RNA22, miRNAMap, miRanda, miRMap and miRDB . A cut-off binding score of >0.95 was used as the screening threshold. miRTarBase (https://miRTarBase.cuhk.edu.cn/~miRTarBase/miRTarBase_2022/php/index.php) is a manually curated database containing information on more than 360,000 experimentally validated miRNA-mRNA interactions . These miRNA-mRNA interactions have been validated experimentally using microarray, CLIP-seq technology, reporter assays, high-throughput sequencing and western blot experiments . All the targets identified via miRTarBase were selected for further analysis. TargetScan v7.0 (http://www.targetscan.org/vert_80/), a flexible web based tool, predicts sequence based effective regulatory targets of miRNAs by incorporating 14 different features . A conservation aggregate score of >0.80 was used as the selection criterion, as this score provides low false discovery rates. An overlap in at least two databases was used as the filtering criterion for prioritizing and considering potential candidate targets. Previous comparative studies conducted on miRNA target prediction programs suggested that no program performed consistently better than all the others. 
Indeed, it has become a common practice for researchers to look at predictions produced by different miRNA-target prediction programs and focus on their intersection, which might enhance the performance of analyses as well as improve prediction precision. The differences between algorithms are mostly seen in their respective weaknesses, i.e., the subset of false positives. For that reason, the fundamental motivation to focus selectively on the shared predictions of two algorithms is to eliminate false positives while preserving the vast majority of true positive RNAs. Conclusively, predictions are much more reliable when two or more prediction algorithms are combined, and the minimal loss of true positives is greatly outweighed by the removal of false positives. Comprehensive analysis of the differentially expressed miRNA datasets revealed 38,937 target circRNAs. On the other hand, comprehensive analysis of 5109 predicted miRNAs against the differentially expressed mRNAs revealed 858,423 circRNAs having binding sites for the respective miRNAs.For the prediction of potential circRNAs associated with differentially expressed mRNAs, we first predicted the miRNAs associated with these mRNAs and then the circRNAs associated with the predicted miRNAs. We used databases including miRDB, miRWalk 2.0, miRTarBase, and TargetScan 7.0 for the prediction of miRNAs associated with the respective mRNAs. 
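The at-least-two-database overlap rule described above reduces to a simple set operation. The following minimal Python sketch is an editorial illustration of that consensus filter; the per-database target sets are made up, and in practice they would be parsed from miRDB, miRWalk 2.0, miRTarBase and TargetScan exports after applying the score cut-offs quoted above.

from collections import Counter
from itertools import chain

# Hypothetical per-database prediction sets for one miRNA (placeholder gene names).
predictions = {
    "miRDB":      {"IL6", "TNF", "CXCL10", "STAT1"},
    "miRWalk":    {"IL6", "CXCL10", "JAK2"},
    "miRTarBase": {"IL6", "TNF"},
    "TargetScan": {"CXCL10", "STAT1", "IL6"},
}

# Count, for every predicted target, how many programs report it.
support = Counter(chain.from_iterable(predictions.values()))

# Keep targets predicted by at least two programs, mirroring the filtering criterion.
consensus = sorted(gene for gene, n in support.items() if n >= 2)
print(consensus)  # ['CXCL10', 'IL6', 'STAT1', 'TNF']

Requiring agreement between programs trades a small number of true positives for a large reduction in false positives, which is exactly the rationale given above.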
Highlighted genes are targets of miRNAs and indirect targets of two prioritized circRNAs \u2003Table 4: Datasets used for the analysis of SARS-CoV-2 related cytokines. For each gene, the studies (Study 1 to Study 12) reporting it as up-regulated are listed; none of the genes was reported as down-regulated in any of the twelve studies.
IL-1\u03b2: Studies 5, 7, 11
IL-2: Studies 1, 7, 12
IL-4: Studies 7, 12
IL-6: Studies 2, 3, 5, 6, 7, 8, 9, 10, 11, 12
IL-7: Studies 1, 10
IL-8: Studies 5, 6, 8, 10
IL-10: Studies 1, 3, 5, 7, 10, 12
IL-13: Study 11
IL-18: Studies 10, 11
TNF-\u03b1: Studies 1, 3, 5, 6, 7, 8, 12
IFN-\u03b3: Studies 3, 7, 12
CCL2: Study 9
CXCL8: Study 9
CXCL10: Study 9
IP-10: Studies 1, 4, 10, 11
MIP-1A: Studies 1, 4, 10, 11
MIP1-B: Studies 4, 11
PDGF: Study 11
MCP1: Studies 1, 4, 10
M-CSF: Studies 10, 11
G-CSF: Studies 1, 10, 11
HGF: Study 11
IL-12, IL-17, IL-23, IL-33, IL-37, IL-38, CXCL6, GM-CSF, FGF and TGF-\u03b2: not reported as up-regulated in any study
We hope that our additions to the manuscript will satisfy the reviewers, and thank both reviewers for their precise and insightful comments, and for the careful attention that they have paid which allowed us to improve the manuscript. We look forward to hearing from you regarding our submission. We would be glad to respond to any further questions and comments that you may have.Reviewer's Responses to QuestionsComments to the Author1. Is the manuscript technically sound, and do the data support the conclusions?Reviewer #1: YesReviewer #2: Yes2. Has the statistical analysis been performed appropriately and rigorously?Reviewer #1: YesReviewer #2: Yes3. Have the authors made all data underlying the findings in their manuscript fully available?Reviewer #1: YesReviewer #2: Yes4. 
Is the manuscript presented in an intelligible fashion and written in standard English?Reviewer #1: YesReviewer #2: YesReferencesAgarwal, V., et al. (2015) Predicting effective microRNA target sites in mammalian mRNAs, elife, 4, e05005.Blanco-Melo, D., et al. (2020) Imbalanced host response to SARS-CoV-2 drives development of COVID-19, Cell, 181, 1036-1045. e1039.Chen, G., et al. (2020) Clinical and immunological features of severe and moderate coronavirus disease 2019, The Journal of clinical investigation, 130, 2620-2629.Chen, L., et al. (2020) Analysis of clinical features of 29 patients with 2019 novel coronavirus pneumonia, Zhonghua jie he he hu xi za zhi= Zhonghua jiehe he huxi zazhi= Chinese journal of tuberculosis and respiratory diseases, 43, E005-E005.Chi, Y., et al. (2020) Serum cytokine and chemokine profile in relation to the severity of coronavirus disease 2019 in China, The Journal of infectious diseases, 222, 746-754.Dat, V.H.X., et al. (2022) Identification of potential microRNA groups for the diagnosis of hepatocellular carcinoma (HCC) using microarray datasets and bioinformatics tools, Heliyon, 8, e08987.Del Valle, D.M., et al. (2020) An inflammatory cytokine signature predicts COVID-19 severity and survival, Nature medicine, 26, 1636-1643.Dhar, S.K., et al. (2021) IL-6 and IL-10 as predictors of disease severity in COVID-19 patients: results from meta-analysis and regression, Heliyon, 7, e06155.Dudekula, D.B., et al. (2016) CircInteractome: a web tool for exploring circular RNAs and their interacting proteins and microRNAs, RNA biology, 13, 34-42.Dweep, H. and Gretz, N. (2015) miRWalk2. 0: a comprehensive atlas of microRNA-target interactions, Nature methods, 12, 697-697.Farr, R., et al. (2021) Altered microRNA expression in COVID-19 patients enables identification of SARS-CoV-2 infection.Huang, C., et al. (2020) Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China, The lancet, 395, 497-506.Huang, H.-Y., et al. (2020) miRTarBase 2020: updates to the experimentally validated microRNA\u2013target interaction database, Nucleic acids research, 48, D148-D154.Kang, J., et al. (2021) RNAInter v4. 0: RNA interactome repository with redefined confidence scoring system and improved accessibility, Nucleic acids research.Lin, L., et al. (2020) Long-term infection of SARS-CoV-2 changed the body's immune status, Clinical Immunology, 218, 108524.Liu, J., et al. (2020) Longitudinal characteristics of lymphocyte responses and cytokine profiles in the peripheral blood of SARS-CoV-2 infected patients, EBioMedicine, 55, 102763.Liu, M., et al. (2019) Circbank: a comprehensive database for circRNA with standard nomenclature, RNA biology, 16, 899-905.Pandya, P.H., et al. (2020) Systems biology approach identifies prognostic signatures of poor overall survival and guides the prioritization of novel bet-chk1 combination therapy for osteosarcoma, Cancers, 12, 2426.Qin, C., et al. (2020) Dysregulation of immune response in patients with coronavirus 2019 (COVID-19) in Wuhan, China, Clinical infectious diseases, 71, 762-768.Shams, R., et al. (2020) Identification of potential microRNA panels for pancreatic cancer diagnosis using microarray datasets and bioinformatics methods, Scientific Reports, 10, 7559.Venugopal, P., et al. (2022) Prioritization of microRNA biomarkers for a prospective evaluation in a cohort of myocardial infarction patients based on their mechanistic role using public datasets, Frontiers in Cardiovascular Medicine, 9.Wang, X. 
(2008) miRDB: a microRNA target prediction and functional annotation database with a wiki interface, RNA, 14, 1012-1017.Wu, A.T., et al. (2021) Multiomics identification of potential targets for Alzheimer disease and antrocin as a therapeutic candidate, Pharmaceutics, 13, 1555.Yang, Y., et al. (2020) Plasma IP-10 and MCP-3 levels are highly associated with disease severity and predict the progression of COVID-19, Journal of Allergy and Clinical Immunology, 146, 119-127. e114.Zhang, P., et al. (2021) Bioinformatics analysis of candidate genes and pathways related to hepatocellular carcinoma in China: a study based on public databases, Pathology and Oncology Research, 13.Attachment Submitted filename: Response to reviewers.docx. Click here for additional data file. 13 Mar 2023Mapping CircRNA\u2013miRNA\u2013mRNA Regulatory Axis Identifies hsa_circ_0080942 and hsa_circ_0080135 as a potential theranostic agents for SARS-CoV-2 infectionPONE-D-22-34657R1Dear Dr. Awan,We\u2019re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.Within one week, you\u2019ll receive an e-mail detailing the required amendments. When these have been addressed, you\u2019ll receive a formal acceptance letter and your manuscript will be scheduled for publication.An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they\u2019ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.Kind regards,Kanhaiya Singh, Ph.D.Academic EditorPLOS ONEAdditional Editor Comments:Reviewers' comments:Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the \u201cComments to the Author\u201d section, enter your conflict of interest statement in the \u201cConfidential to Editor\u201d section, and submit your \"Accept\" recommendation.Reviewer #2:\u00a0All comments have been addressed********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2:\u00a0Yes********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2:\u00a0Yes********** 4. 
Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. The Reviewer #2:\u00a0Yes********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #2:\u00a0Yes********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #2:\u00a0Following the suggestions, the author has addressed the issues and modified the manuscript accordingly and can be considered for the publication********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose \u201cno\u201d, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2:\u00a0No********** 4 Apr 2023PONE-D-22-34657R1 Mapping CircRNA\u2013miRNA\u2013mRNA Regulatory Axis Identifies hsa_circ_0080942 and hsa_circ_0080135 as a potential theranostic agents for SARS-CoV-2 infection Dear Dr. Awan:I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. onepress@plos.org.If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact plosone@plos.org. If we can help with anything else, please email us at Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staffon behalf ofDr. Kanhaiya Singh Academic EditorPLOS ONE"} +{"text": "K\u03b1 radiation to single-crystal X-ray crystallography and evaluates two different detectors, the Bruker Photon III and the Dectris Eiger2 CdTe, for usage in spherical and aspherical structural models.This paper communicates the first application of MetalJet In K\u03b1 radiation wavelengths for use in X-ray diffraction experiments. 
The purpose of this paper is to demonstrate the application of indium K\u03b1 radiation in independent-atom model refinement, as well as approaches using aspherical atomic form factors. The results vary greatly depending on the detector employed, as the energy cut-off of the Eiger2 CdTe provides a solution to a unique energy contamination problem of the MetalJet In radiation, which the Photon III detector cannot provide.The MetalJet source makes available new Finally, a C11H10O2S YLID crystal, 5, was used to compare the two detectors and the setup with a second machine using aspherical refinements. The YLID crystal is an established benchmark for IAM refinements but can also be used for benchmarks at low temperatures and Incoatec Helios optics. With the Bruker Photon III detector, gallium contamination was filtered using a palladium foil of 40\u2005\u00b5m thickness. For the Eiger2 CdTe 1M detector, a custom solution within the D8 Venture was implemented. Steering using Diffraction data with silver radiation were collected using a Bruker D8 Venture four-circle diffractometer with an Incoatec I\u03bcS 3.0 Ag source and a Bruker Photon III detector as is.2.3.SAINT was performed using structure factors from periodic projector augmented wave (PAW) density functional theory (DFT) calculations. They employ the SCAN functional is less than half the energy of indium K\u03b1 % for the Eiger2 and 0.44\u2005(4)% for the Photon III detector. The relative percentages are more indicative than the absolute ones, as effects such as scattering cross sections and absorption have been neglected but should be the same for both detectors. Therefore, we could show a very small residual contamination with a slight advantage of the Eiger2 detector. A higher thickness of the palladium attenuator would have solved the problem, but would have led to a lower overall indium intensity.The second, more quantitative, approach used a pseudo-twin refinement of the two sets of reflections to get an impression of the relative strength of the contamination. Cell parameters and orientation and instrument parameters were determined in a first integration with 3.2.Rr.i.m shows the superior performance of the Eiger2 detector for each of the structures studied in this work for both the overall and multiplicity-equivalent data sets.The redundancy-independent merge factor and egross follow the I/\u03c3 indicator of the data collection. Accordingly, we see the superior performance of the agreement with the data collected by the Eiger2 CdTe on the In MetalJet, separated by a significant margin from the I\u03bcS Ag measurement using the Photon III. The In MetalJet measurement on the Photon III shows the highest values and number of undescribed electrons egross for both evaluated methodologies. For the unweighted agreement factor, both Photon III measurements yield the same value for the HAR. However, for the multipole refinement the Photon III In measurement exhibits a higher value than the Photon III measurement using Ag K\u03b1 radiation.The crystallographic quality indicators are very similar for both methods Figs. 3 and 4 \u25b8:wR2 and the goodness of fit (GOF). For the Ag/Photon III and In/Eiger2 data, the performance of these two indicators is basically identical for the HAR, while the multipole refinement shows a slight advantage for the Ag/Photon III. 
The In/Photon III data show inferior performance for these indicators as well.The conclusion is less clear on the weighted crystallographic agreement factor The Photon III measurements show a higher value in the negative difference electron density for both minima, and also an overall shift in the Henn\u2013Meindl plots. The I\u03bcS Ag data also show a slightly lower maximum in the difference electron density, whereas the In/Photon III data set again exhibits the most pronounced maxima and minima in the difference electron density.The In/Photon III measurement shows a significant jump in the quotient of the sum of observed to the sum of fitted intensities (supporting information) closely follow the Henn\u2013Meindl plots. All refinements show a low level of difference electron density. The indium MetalJet data obtained with the Photon III detector show a noisy overall difference electron density at an isolevel \u00b10.05\u2005e\u2005\u00c5\u22123, with the highest features being located near the heaviest atom, namely the ylid sulfur. At the same time the I\u03bcS 3.0 Ag data with the Photon III show a disposition towards a negative difference electron density, which can be explained by the intensity of the inner data matching less accurately, as observed in the DRKPlot-type plot. In comparison, the difference electron density obtained by the indium MetalJet with the Eiger2 CdTe is much flatter. The visible features at the same low isolevel (\u00b10.05\u2005e\u2005\u00c5\u22123) are limited to the vicinity of the sulfur atom and the oxygen atoms, while also being less strongly expressed at these positions. The resulting difference electron densities of the multipolar refinements are comparable for the three investigations . In the I\u03bcS 3.0 Ag/Photon III data, the increased number of parameters does counteract the overall negative density near the sulfur atom and part, but not all, of the discrepancy is assigned to the density, as investigated in the next section.The difference electron densities of the Hirshfeld atom refinement Fig. 5 and the 3.4.2.et al. ScCoC_eiger, ScCoC_photon, ScPtSi_eiger, ScPtSi_photon, NaWO4_eiger, NaWO4_photon, LAla_eiger, LAla_photon, Ylid_HAR_Ag_Photon, Ylid_HAR_In_Eiger, Ylid_HAR_In_Photon, Ylid_MM_Ag_Photon, Ylid_MM_In_Eiger, Ylid_MM_In_Photon. DOI: 10.1107/S1600576723007215/nb5354ScCoC_eigersup2.hklStructure factors: contains datablock(s) ScCoC_eiger. DOI: 10.1107/S1600576723007215/nb5354ScCoC_photonsup3.hklStructure factors: contains datablock(s) ScCoC_photon. DOI: 10.1107/S1600576723007215/nb5354ScPtSi_eigersup4.hklStructure factors: contains datablock(s) ScPtSi_eiger. DOI: 10.1107/S1600576723007215/nb5354ScPtSi_photonsup5.hklStructure factors: contains datablock(s) ScPtSi_photon. DOI: 10.1107/S1600576723007215/nb5354NaWO4_eigersup6.hklStructure factors: contains datablock(s) NaWO4_eiger. DOI: 10.1107/S1600576723007215/nb5354NaWO4_photonsup7.hklStructure factors: contains datablock(s) NaWO4_photon. DOI: 10.1107/S1600576723007215/nb5354LAla_eigersup8.hklStructure factors: contains datablock(s) LAla_eiger. DOI: 10.1107/S1600576723007215/nb5354LAla_photonsup9.hklStructure factors: contains datablock(s) LAla_photon. DOI: 10.1107/S1600576723007215/nb5354Ylid_HAR_Ag_Photonsup10.hklStructure factors: contains datablock(s) Ylid_HAR_Ag_Photon. DOI: 10.1107/S1600576723007215/nb5354Ylid_HAR_In_Eigersup11.hklStructure factors: contains datablock(s) Ylid_HAR_In_Eiger. 
DOI: 10.1107/S1600576723007215/nb5354Ylid_HAR_In_Photonsup12.hklStructure factors: contains datablock(s) Ylid_HAR_In_Photon. DOI: 10.1107/S1600576723007215/nb5354Ylid_MM_Ag_Photonsup13.hklStructure factors: contains datablock(s) Ylid_MM_Ag_Photon. DOI: 10.1107/S1600576723007215/nb5354Ylid_MM_In_Eigersup14.hklStructure factors: contains datablock(s) Ylid_MM_In_Eiger. DOI: 10.1107/S1600576723007215/nb5354Ylid_MM_In_Photonsup15.hklStructure factors: contains datablock(s) Ylid_MM_In_Photon. DOI: 10.1107/S1600576723007215/nb5354sup16.pdfAdditional refinement details. DOI:"} +{"text": "Since the publication of the article entitled \u201cMeasurable Residual Disease Testing in Multiple Myeloma Routine Clinical Practice: A Modified Delphi Study\u201d (HemaSphere. 2023;7(9):e942), a correction is needed for Table 3 and Table 4. Bullet points and subheaders were misaligned, therefore reducing clarity for readers. This has now been adjusted. The journal office apologizes for any inconvenience. The changes have been made online: https://journals.lww.com/hemasphere/fulltext/2023/09000/measurable_residual_disease_testing_in_multiple.12.aspx"} +{"text": "Peroral endoscopic myotomy (POEM) is an increasingly adopted strategy for the treatment of Zenker\u2019s diverticulum . Video\u20061\u2002Peroral endoscopic myotomy for Zenker\u2019s diverticulum without tunneling.After the initial mucosal incision, both submucosal sides of the septum are lifted, with a mixture of hydroxyethyl starch and indigo carmine. Then we proceed to direct myotomy of the septum . Endoscopy_UCTN_Code_TTT_1AO_2AG"} +{"text": "Correction to: Journal of Nuclear Cardiology, https://doi.org/10.1007/s12350-022-03164-5. The original published Supplementary File I was incorrect and is replaced with the following Supplementary File: \u201c12350_2022_3164_MOESM2_ESM\u201d.The original article has been corrected."} +{"text": "The recruitment process for the position of ECDC Director is handled by the European Commission.\u00a0The deadline for the submission of applications is 26 June 2023, 12:00 noon CEST.For detailed information on the job description, requirements and the application procedure, please visit:https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ%3AJOC_2023_185_A_0001"} +{"text": "Bone mineral content (Z_BMC), fat mass (Z_FM), lean mass (Z_LM), and bone mineral density for the total body (Z_TB) and lumbar spine (Z_L1\u20134) were measured using dual-energy X-ray absorptiometry. Twenty-nine percent of children were vitamin D deficient (25-hydroxyvitamin D level of <20 ng/mL). In winter, low vitamin D intake (P = 0.019) and fewer daylight hours (P = 0.015) were associated with low 25-hydroxyvitamin D level. The 25-hydroxyvitamin D level correlated positively with Z_BMC (P = 0.023), Z_TB (P = 0.018), and Z_L1\u20134 (P = 0.043) independently of sex, puberty, Z_FM, Z_LM, physical activity level, and calcium intake. Z_FM correlated independently with Z_BMC (P < 0.001), Z_TB (P = 0.037), and Z_L1\u20134 (P < 0.001). In conclusion, almost half of peripubertal nonobese children were vitamin D deficient in winter. Considering the beneficial effects of adequate vitamin D status and adiposity on bone health, the current DRI of vitamin D should be upgraded to prevent vitamin D deficiency.The dietary reference intake (DRI) of vitamin D for Korean children was reduced from 400 IU/day in 2005 to 200 IU/day in 2010. 
We evaluated the risk factors for low vitamin D status and its relationships with bone health in peripubertal nonobese children living in Seoul or Gyeonggi-do. One hundred children participated in the winter (
Protein loop modeling problems arise frequently in comparative modeling, designing new proteins, and solving or refining protein folds with limited crystallographic or NMR data, including weakly populated (\u2018invisible\u2019) states cis-Pro touch turns, loops that thread through tunnels formed by the surrounding protein, and segments of unprecedented length. These cases are solved ab initio by SWA with sub-Angstrom accuracy, albeit at the expense of thousands of CPU-hours per loop. Additional atomic-accuracy results from five blind predictions, including a comparative modeling problem for a protein/RNA complex, give further support to the stepwise ansatz and its Rosetta SWA implementation. Analogous to recent successes of RNA modeling in structural biology To address the conformational sampling bottleneck, this article describes the import of stepwise assembly from RNA modeling to protein loop structure prediction in the Rosetta framework . The res1oyc), an unsolved case in recent studies NHB in 1oyc case). Furthermore, loops make few interactions with the non-polar core, arguably the best modeled region of protein structures 1oyc loop residues , which uses backbone-only hierarchical loop build-up followed by molecular mechanics force field refinement cis proline sampling, and more stringent chain closure. Even in these more extensive calculations , crystallographically observed combinations, and closed loop geometries, sampling each amino acid at sub-Angstrom resolution requires at least tens of backbone conformers (see SI Methods), resulting in at least 1012 backbone torsional combinations for a 12-residue loop. Subsequent side-chain optimization on these backbones would require tens of millions of CPU-hours Most current loop modeling approaches that seek atomic resolution share a seemingly necessary working approximation: an initial search phase using a reduced representation with simplified or no side-chain atoms. Such coarse search phases avoid the complexity and large number of local minima inherent to all-atom representations, but fail to capture non-polar packing interactions and hydrogen bonds involving side-chain atoms, which are pervasive . This protocol has been coded into the Rosetta framework as a stepwise assembly (SWA) algorithm.Recent work in three-dimensional RNA modeling 1oyc loop. For the first five residues implementation was used to carry out structure prediction on a benchmark set of thirty-five protein loops. Twenty of these cases were 12-residue loops used previously to test PLOP and Rosetta approaches First, examining the subset of twenty loops used in prior PLOP and Rosetta studies permitted direct comparison of SWA to these state-of-the-art methods see also . Here, tcis prolines, and reported RMSD for the very lowest energy model.] The high accuracy SWA predictions that were intractable to previous PLOP and/or Rosetta approaches included the 1oyc case described above and 0.89 \u00c5 (best of five lowest energy structures). SWA modeling again outperformed KIC modeling overall, although not by as much as in the first 20-loop benchmark [1.9 \u00c5 (lowest energy) and 0.94 \u00c5 (best of five)]. In 8 of these 15 complex loop tests, SWA returned at least one of five lowest energy models with sub-Angstrom accuracy . High-re12\u2013fold more accessible conformations). However, the stepwise ansatz underlying the SWA method constrains sampling to a subspace that requires only 4-fold more steps to search. 
For three of the six cases with lengths greater than or equal to 18 residues, the SWA method achieved sub-Angstrom accuracy. Modeling with such accuracy included two 24-residue cases. One involved a mixture of irregular, helix, and strand segments in a bacteriophage head protein . Interestingly, in two of these cases (1arp and 1huw), KIC modeling outperformed SWA modeling in terms of RMSD but gave significantly worse Rosetta energies and not released outside their research group. The closest previously solved structure exhibited low sequence identity to the target ; and analogs of the loop regions either did not exist (loop A) or were different in sequence at all 12 positions (loop B) or at 10 positions (loops C and D) in the homologous structure. The Weis group provided a starting structure with all of these loops and all side-chains removed, and ur cases , includiur cases . In threur cases .2fc3] based on threading with HHPRED and Rosetta 3v7e; As a fifth blind test, SWA models were generated for a loop of protein YbxF This article has presented a strategy for protein structure prediction that achieves atomic accuracy on the majority of loop modeling targets through a systematic all-atom enumeration. Several of these targets were difficult or intractable with prior approaches, despite mainly involving loops excised from crystallographic models, the simplest such puzzles. The main innovation herein is a stepwise ansatz imported from RNA structure modeling. This working hypothesis posits that realistic loop structures are reachable via the residue-by-residue building of partial conformations that are themselves well-stabilized by precise hydrogen bonds and non-polar packing interactions. This ansatz underlies a Rosetta stepwise assembly (SWA) protocol and is supported by tests of the SWA algorithm on forty loop puzzles, including twenty shared with prior loop modeling benchmarks, fifteen more difficult loop cases, and five blind tests. In the majority of cases (32 of 40), including loop puzzles of unprecedented length, all the blind tests, and a comparative model , the SWA method achieved sub-Angstrom accuracy.The stepwise assembly protocol is novel in protein modeling studies: while prior efforts have proposed the build-up of short peptides or lattice models An analogy to lock-picking helps clarify the strengths and limitations of the stepwise assembly method described herein. The main innovation of the protein SWA method is the enumerative sampling of individual residue conformations, which helps guarantee precise fit of the residues' atoms into the surrounding environment. This scenario is a kind of \u2018lock-and-key\u2019 problem but differs from the previous RNA SWA method To unlock a tumbler lock, one \u2018enumeratively samples\u2019 each pin conformation through manipulation with a probing pick, until the boundaries within the pin doublet are aligned with the lock's cylindrical surface . The pinThis analogy clarifies the importance of precise fits between the loop and its surroundings for successful modeling; if such a fit is not possible, the current SWA implementation will give inaccurate solutions. In particular, SWA will have difficulty in problems where the given protein backbone (outside the loop) deviates from its actual conformation. Such scenarios are encountered in comparative modeling and protein design applications. 
Cases like the community-wide blind trial YbxF above demonstrN residues, the formal size of the conformational space scales exponentially with N; the actual experimental folding times of proteins do not scale so poorly, implying the general existence of folding intermediates or pathways rather than a random walk search N. The resulting efficiency permitted the atomic resolution recovery herein of loops with lengths up to 24 residues, whereas prior work tackled loops no longer than 12 residues tool for uncovering deficiencies in the Rosetta energy function, as in the solvent-exposed loops described above was implemented in C++ in the Rosetta codebase and is available in Rosetta release 3.6, free to academic users at de novo from residue k to residue l, each stage of stepwise assembly involved creating models of the loop with an N-terminal fragment built forward from k\u20131 to residue i and a C-terminal fragment built backward from residue l+1 to j. Each stage could thus be indexed with the two residue positions . The SWA calculation proceeded recursively from stages with short fragments built into the structure towards models with longer fragments, i.e., i increasing from k\u20131 or j decreasing from l+1. This building corresponds to movement from the top-right to the bottom or left, respectively, in A diagram of the entire stepwise assembly (SWA) calculation is given as a directed acyclic graph (DAG) laid out in the style of a dynamic programming matrix in k\u20131, l+1) of the DAG in packer. This pack optimized the rotamers using simulated annealing, after precomputing pairwise energies between all potential side-chain rotamers minimizerThe first step was a \u2018pre-packing\u2019 of the side-chains of the starting model with no loop atoms, corresponding to the top-right corner to (downward arrows in i and \u03c8i) and the previous, adjacent residue , permitting the discovery of configurations in which the dipeptide segment is stabilized by interactions by the new residue without requiring interactions at the previous residue. . For each \u03c6 or \u03c8, the sampling was a grid search from \u2013180\u00b0 to 180\u00b0 in 20\u00b0 increments. To keep only sterically realistic backbones, configurations in which a residue's gave Rosetta ramachandran score greater than 0.8 Rosetta units were discarded. The \u03c9 torsion was assumed to be 180\u00b0 (trans configuration), except for residues that preceded prolines, which were also sampled at 0\u00b0 (cis). The number of backbone combinations varied from tens to several thousand (for segments involving glycine residues and/or residues that preceded proline).The core computation in SWA is the addition of a new residue to a model and enumeration of its backbone conformations. For additions to the i and i\u20131) and their potential neighbors . The side-chain sampling was carried out with the Rosetta rotamer_trials algorithm. The searched side-chain rotamers included those listed in the backbone-dependent Rosetta rotamer library as well as additional rotamers with \u03c71 and \u03c72 shifted by \u00b11 standard deviation from the standard rotamer values. The discreteness of the backbone grid search and rotamer library can penalize favorable side-chain interactions due to minor clashes or slightly imperfect hydrogen bonds. 
Therefore, the energy function for side-chain optimization was modified from the current standard Rosetta all-atom energy function (score12) to include a lower weight on fa_rep , a higher weight on hbond_sc , and no attenuation of hydrogen bond strength at solvent-exposed residues For each combination of backbone torsion angles, the side-chains of the loop and its surroundings were optimized. Analogous to calculations in protein-protein docking i\u20131 and i . If the RMSD value to any lower energy clusters was less than a fine cutoff (0.10 \u00c5), the model was considered too close to an existing representative and discarded; otherwise the model seeded a new cluster. The lowest energy 400 models after clustering were carried forward to minimization.After enumerative backbone sampling and side-chain optimization, models were clustered as follows. In order of energy, starting with the lowest energy model, the RMSD of each model to all lower energy clusters was computed; this RMSD value was calculated over N, C, C\u03b1, and O atoms at the rebuilt residues minimizerscore12. The models were clustered as described above, and saved to disk.Minimization involved backbone torsions at the sampled residue and torsions \u03c7 for all neighboring side-chains . This torsional optimization was performed with the non-monotone Armijo variant of BFGS minimization in the Rosetta j to the C-terminal loop fragment where 1< (j\u2013I\u20131) <3].For the final stages of SWA assembly, N-terminal and C-terminal fragments were bridged to form continuous loops with ideal backbone bond lengths and angles , and thei+1 was appended to the N-terminal fragment, and other gap residues i+2 to j\u20131 were prepended to the C-terminal fragment. The \u03c6 and \u03c8 torsion angles of the first gap residue i+1 were sampled by grid search as above; and, to attempt chain closure, backbone torsions for \u2018bridge\u2019 residues i+2 up to j\u20131 were subjected to 1000 cycles of cyclic coordinate descent , leading to hundreds of thousands of models. Even larger numbers were generated at the final stage of full-length loop modeling due to the many routes to chain closure. However, these models typically spanned a very large range of energies, and SWA seeks to carry forward only the lowest-energy configurations at each rebuild stage. Thus all models for a given stage were collated, filtered to retain the 4000 lowest energy models, and then reclustered. The clustering followed the procedure described above, except that RMSDs were calculated over the entire rebuilt loop fragments and a clustering RMSD threshold of 0.25 \u00c5 was applied. The 400 lowest energy configurations were carried forward. In the final stage (full-length loop models), models were re-clustered with RMSD threshold 1.0 \u00c5, and the five lowest energy models were taken as the SWA predictions.For a given build-up stage (l+1) is coupled to the loop conformation through the torsion \u03c6l+1. This degree of freedom was not sampled, as \u2018native\u2019 backbone hydrogen atoms were present in the benchmark starting structures. However, for the five blind tests, the hydrogens were not available a priori; they were initially placed in the starting excised structure with Reduce l+1 was not guaranteed to be in its \u2018native\u2019 position, \u03c6l+1 was sampled during build-up of residues l\u20131 and l.In the SWA runs for this study's 35-loop benchmark, some settings in the loop modeling were chosen so as to match prior benchmarks. 
First, for proteins containing disulfide bonds, these residue-residue pairings were assumed to be known [as in prior work ]. The recursion scales as O(N) with the number of residues N, rather than as O(N^2), and is analogous to the recursion used previously for RNA loop modeling . The O(N) calculation was carried out first. If the energy gap between the lowest energy model and the second lowest energy model was less than 1 kBT, the calculation was assumed to not have clearly converged on a confident model, and the loop building was repeated with the full O(N^2) calculation, except for the very long 1rhd and 7cat test cases. Overall, 18 of 40 cases were modeled with the O(N) calculation. Note that some cases could be solved without carrying out the full SWA dynamic programming matrix, but instead by building from the N-terminal side , by building in separate runs from the C-terminal side , and then combining these separate solutions with chain closure to attain final models. These simplified calculations, whose number of steps scales linearly with the number of loop residues N (see above), are set up with the same swa_protein_dagman.py script above, but without the last flag \u2013loop_force_Nsquared. For blind tests, the \u03c8 torsion for the starting loop residue and \u03c6 torsion for the ending loop residue were sampled (see above); this was accomplished by omitting the flag -disable_sampling_of_loop_takeoff.Scripts for setting up simplified calculations that take O(N) steps, and for post-processing all models available for a given stage by collation into single files and removal of unused files, are available in tools/SWA_protein_python/.Each of the build steps described in protein_build.dag corresponds to a single command line using the Rosetta executable swa_protein_main.An example command-line for pre-packing the 1OYC loop modeling case is:swa_protein_main. -database -rebuild -out:file:silent_struct_type binary -fasta 1oyc.fasta -n_sample 18 -nstruct 400 -cluster:radius 0.100 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -in:detect_disulf false -add_peptide_plane -native 1oyc_min.pdb -superimpose_res 1-202 215-399 -fixed_res 1-202 215-399 -calc_rms_res 203-214 -jump_res 1 399 -disable_sampling_of_loop_takeoff -mute all -s1 noloop_1oyc_min.pdb -input_res1 1-202 215-399 -use_packer_instead_of_rotamer_trials -out:file:silent REGION_215_202/START_FROM_START_PDB/region_215_202_sample.outAn example command line that builds residue 206 onto the end of a N-terminal fragment already containing 203\u2013205 is:swa_protein_main. -database -rebuild -out:file:silent_struct_type binary -fasta 1oyc.fasta -n_sample 18 -nstruct 400 -cluster:radius 0.100 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -in:detect_disulf false -add_peptide_plane -native 1oyc_min.pdb -superimpose_res 1-202 215-399 -fixed_res 1-202 215-399 -calc_rms_res 203-214 -jump_res 1 399 -disable_sampling_of_loop_takeoff -mute all -silent1 region_215_205_sample.cluster.out -tags1 S_0 -input_res1 1-205 215-399 -sample_res 205 206 -out:file:silent REGION_215_206/START_FROM_REGION_215_205_DENOVO_S_0/region_215_206_sample.outHere, the build is onto the lowest energy model (S_0) available from a previous stage that had rebuilt residues 203\u2013205 from the N-terminal end.An example command line that builds residue 209 onto the N-terminal end of a C-terminal fragment already containing residues 210\u2013214:swa_protein_main. 
An example command line that builds residue 209 onto the N-terminal end of a C-terminal fragment already containing residues 210\u2013214 is:

swa_protein_main. -database -rebuild -out:file:silent_struct_type binary -fasta 1oyc.fasta -n_sample 18 -nstruct 400 -cluster:radius 0.100 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -in:detect_disulf false -add_peptide_plane -native 1oyc_min.pdb -superimpose_res 1-202 215-399 -fixed_res 1-202 215-399 -calc_rms_res 203-214 -jump_res 1 399 -disable_sampling_of_loop_takeoff -mute all -silent1 region_210_202_sample.cluster.out -tags1 S_2 -input_res1 1-202 210-399 -sample_res 209 210 -out:file:silent REGION_209_202/START_FROM_REGION_210_202_DENOVO_S_2/region_209_202_sample.out

Here, the build is onto the third lowest energy model (S_2) available from a previous stage that had rebuilt residues 210\u2013214 from the C-terminal end.

An example command line that takes the lowest energy model from the ensemble that has built up both the N-terminal fragment 203\u2013206 and C-terminal fragment 209\u2013215, samples backbone degrees of freedom at residue 207, and closes the chain by cyclic coordinate descent (CCD) across residues 207 and 208:

swa_protein_main. -database -rebuild -out:file:silent_struct_type binary -fasta 1oyc.fasta -n_sample 18 -nstruct 400 -cluster:radius 0.100 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -in:detect_disulf false -add_peptide_plane -native 1oyc_min.pdb -superimpose_res 1-202 215-399 -fixed_res 1-202 215-399 -calc_rms_res 203-214 -jump_res 1 399 -disable_sampling_of_loop_takeoff -mute all -silent1 region_209_206_sample.cluster.out -tags1 S_0 -input_res1 1-206 209-399 -sample_res 208 -bridge_res 207 -cutpoint_closed 207 -ccd_close -global_optimize -out:file:silent REGION_207_206/START_FROM_REGION_209_206_CLOSE_LOOP_CCD_S_0/region_207_206_sample.out

An example command line that combines the lowest energy model for N-terminal fragment 203\u2013206 with every model for C-terminal fragment 209\u2013215 and carries out CCD (cyclic coordinate descent) chain closure across 207 and 208:

swa_protein_main. -database -rebuild -out:file:silent_struct_type binary -fasta 1oyc.fasta -n_sample 18 -nstruct 400 -cluster:radius 0.100 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -in:detect_disulf false -add_peptide_plane -native 1oyc_min.pdb -superimpose_res 1-202 215-399 -fixed_res 1-202 215-399 -calc_rms_res 203-214 -jump_res 1 399 -disable_sampling_of_loop_takeoff -mute all -silent1 region_209_202_sample.cluster.out -tags1 S_0 -input_res1 1-202 209-399 -silent2 region_215_206_sample.cluster.out -input_res2 1-206 215-399 -bridge_res 207 208 -cutpoint_closed 206 -ccd_close -global_optimize -out:file:silent REGION_207_206/START_FROM_REGION_209_202_REGION_215_206_CLOSE_LOOP_CCD_S_0/region_207_206_sample.out

An example command line for clustering the lowest energy 4000 models available for the N-terminal fragment 203\u2013205:

swa_protein_main. -cluster_test -silent_read_through_errors -in:file:silent REGION_215_205/start_from_region_215_204_denovo_sample.low4000.out -in:file:silent_struct_type binary -database -cluster:radius 0.25 -calc_rms_res 203-214 -out:file:silent region_215_205_sample.cluster.out -nstruct 400 -score_diff_cut 10.000 -working_res 1-205 215-399

To help assess the efficiency of conformational sampling, the all-atom Rosetta energies of crystallographic loops were obtained through two strategies.
Generally, crystallographic loops contain minor steric clashes that are penalized by the Rosetta energy function, and these conformations need to be subjected to local optimization to permit comparison to de novo models, with the same bond lengths and angles as used in the modeling.

The first \u2018idealize-and-optimize\u2019 strategy mimicked that of prior work. The loop was idealized (with the Rosetta idealize application), and the resulting idealized loop conformation was grafted into the same side-chain pre-packed structure as used in the SWA runs above. The loop and all its neighbors were subjected to combinatorial optimization by the packer, and then all loop torsions and all side-chain torsions in the loop and surrounding residues were subjected to continuous minimization as above. As above, keeping the backbone outside the loop rigorously fixed requires a formal chainbreak within the loop, which remains closed during minimization due to the linear_chainbreak term. For each potential chainbreak location, 20 runs were carried out. The command line used was the following:

swa_protein_main. -database -rebuild -out:file:silent_struct_type binary -fasta 1oyc.fasta -n_sample 18 -nstruct 400 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -in:detect_disulf false -add_peptide_plane -native 1oyc_min.pdb -superimpose_res 1-202 215-399 -fixed_res 1-202 215-399 -calc_rms_res 203-214 -jump_res 1 399 -disable_sampling_of_loop_takeoff -silent1 region_215_202_sample.cluster.out -tags S_0 -input_res1 1-202 215-399 -cutpoint_closed 214 -global_optimize -out:file:silent MINIMIZE_NATIVE/12/1oyc_minimize_native.out -cluster:radius 0.0 -s2 1oyc_min_idealize.pdb -input_res2 202-215 -slice_res2 202-215

A second \u2018native SWA\u2019 strategy was used to optimize the loop conformation around the crystallographic loop. In this strategy, the entire SWA calculation after the initial side-chain prepacking was repeated, but at each sampling step, models were only carried forward if their backbone RMSD to the crystallographic loop was less than 2.0 \u00c5. In addition, Rosetta coordinate constraints at loop C\u03b1 atoms were implemented with the following Python script command line:

generate_CA_constraints.py 1oyc.pdb -cst_res 203-214 -coord_cst -anchor_res 1 -fade > 1oyc_coordinate2.0.cst

The script is available in tools/SWA_protein_python/. These constraints applied a penalty for each C\u03b1 atom deviating further than 2.0 \u00c5 from the crystallographic position, rising to a maximum of 10.0 kBT for deviations of 4.0 \u00c5; the functional form was a cubic spline with zero derivative at 2.0 \u00c5 and 4.0 \u00c5 (the fade function in Rosetta). These constraints were activated in SWA runs by including the flags -rmsd_screen 2.0 and -cst_file 1oyc_coordinate2.0.cst in the swa_protein_main command lines above. In all tested loops, the SWA-native strategy gave models within 1.0 \u00c5 C\u03b1 RMSD to the crystallographic loop with lower energies than the idealize-and-optimize strategy; the SWA-native values are thus the ones reported.
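The fade-type constraint described above has a simple closed form: zero below the inner threshold, a maximum penalty above the outer threshold, and a cubic spline with zero end-derivatives in between. The following Python function is an illustrative reimplementation of that functional shape, not the Rosetta source.

def fade_penalty(d, lo=2.0, hi=4.0, max_penalty=10.0):
    # Penalty (in kBT) on a C-alpha deviation d (Angstroms): zero below
    # lo, max_penalty above hi, and a smooth cubic ramp in between with
    # zero derivative at both lo and hi.
    if d <= lo:
        return 0.0
    if d >= hi:
        return max_penalty
    t = (d - lo) / (hi - lo)
    return max_penalty * t * t * (3.0 - 2.0 * t)

For example, fade_penalty(3.0) evaluates to 5.0, halfway up the ramp between the 2.0 and 4.0 \u00c5 thresholds.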
To permit comparison of the SWA approach to a prior state-of-the-art method, the KIC (kinematic closure) loop modeling method in Rosetta was repeated on the 20-protein PLOP/Rosetta benchmark. The following command line was used:

loopmodel. -database -loops:remodel perturb_kic -loops:refine refine_kic -loops:input_pdb region_FINAL.out.1.pdb -in:file:native 1oyc_min.pdb -loops:loop_file 1oyc.loop -loops:max_kic_build_attempts 10000 -in:file:fullatom -out:file:fullatom -out:prefix 1oyc -out:pdb -ex1 -ex2 -ex1aro -extrachi_cutoff 0 -out:nstruct 1000 -out:file:silent_struct_type binary -out:file:silent 1oyc_kic.out -fix_ca_bond_angles -kic_use_linear_chainbreak -allow_omega_move -sample_omega_at_pre_prolines

The command line is identical to that used previously, except for some additional terms to ensure a fair comparison to the SWA modeling above. The flag -fix_ca_bond_angles retains N\u2013C\u03b1\u2013C bond angles at ideal values defined by Rosetta; sampling these angles did not improve accuracy in prior work. The KIC loop modeling method requires an input starting structure with a loop pre-built, and this loop defines the fixed bond lengths and angles used in the run. Rather than using the crystallographic loops, these runs took their starting loops from prior de novo models (here region_FINAL.out.1.pdb).

Figure S1. Energy vs. RMSD plots at intermediate stages of stepwise assembly build-up. Build-up in panels (a) to (k) corresponds to the residue-by-residue path for the 1oyc loop (residues 203\u2013214). RMSD over all backbone heavy atoms is shown on the x-axis, using the corresponding loop fragments in the crystallographic loop as a reference. In each panel, the symbol with a black outline marks the specific model that eventually leads to the final lowest energy model in (k). (EPS)

Figure S2. Energy vs. RMSD summaries of modeling runs for the 20-loop PLOP/Rosetta benchmark. Rosetta all-atom energy values and loop C\u03b1 RMSDs are plotted for models from kinematic closure Monte Carlo; stepwise assembly with the O (N) simple calculation (red); stepwise assembly with the full O (N2) build-up (pink); crystallographic loops optimized by SWA re-building with constraints (blue); and crystallographic loops optimized by idealizing, re-packing, and continuous minimization (green). (ZIP)

Figure S3. Energy vs. RMSD summaries of modeling runs for 15 difficult and 5 blind test cases. Rosetta all-atom energy values and loop C\u03b1 RMSDs are plotted for models from kinematic closure Monte Carlo; stepwise assembly with the O (N) simple calculation (red); stepwise assembly with the full O (N2) build-up (pink); crystallographic loops optimized by SWA re-building with constraints (blue); and crystallographic loops optimized by idealizing, re-packing, and continuous minimization (green). For 3v7e, optimized crystallographic energies are not presented, since loop building was carried out on a comparative model; and energies between KIC and SWA cannot be compared, as RNA was not included in the former case. (ZIP)

Table S1. Comparison of all loop modeling methods in the 20-loop PLOP/Rosetta benchmark. (PDF)

Table S2. Energy comparisons to determine convergence and conformational sampling efficiency. (PDF)"}

It presently has 115 members. An interesting recent interaction is presented herewith.]

Richard, I have no quibbles with philosophy per se. I admire it as the highest form of intellectual enquiry. But it will not do for this highest form to look askance at the obvious that empirical science is producing. That stance serves science well, but it does not serve philosophy well at all, for philosophy is not forwarded in its line of enquiry in a fruitful way.

Let me put it in a nutshell.
Let\u2019s try and answer these questions.

Philosophers are mainly discussing brain functions. How many of them have even a working knowledge of its structure - neuroanatomy, neurophysiology - leave alone its pathology?

Tell me, why is there no entry \u2018brain\u2019 in any encyclopaedia or dictionary of philosophy? [Correct me if I am wrong. Have you found any that has an entry?]

How many [philosophers] have even tried to read the entry \u2018brain\u2019 in an encyclopaedia [see http://en.wikipedia.org/wiki/Brain] and, more specifically, \u2018Human Brain\u2019 [see http://en.wikipedia.org/wiki/Human_brain]? There are concepts here that may appear technical, but one can at least start; and what one doesn\u2019t understand one can ask from scientific colleagues in the field of brain biology.

Care to opine?

Have you made a New Year resolution that you will study the brain?

Happy New Year.

Ajai
31 Dec 2010

__________________________________________________________________________________________________________________________

On Fri, 31/12/10, Richard Godwin wrote:

Sorry, but I still think you don\u2019t make your point. Often discussions of brain functions are accompanied with brain areas or regions, such as primarily the pre-frontal cortex. But if not, then what difference does it make? The brain functions in such and such a way; what difference does it make to point out the pre-frontal cortex and associated regions, such as the amygdala and gray matter? Why should philosophers discuss the anatomy of the brain? If they don\u2019t discuss brain structure, etc., that doesn\u2019t mean they have no knowledge of it, or that they can\u2019t answer questions specific to it. Pathology is important and is mentioned when appropriate, such as the cognitive effects of brain injuries.

Almost all philosophers teach and write on \u201cphilosophy of mind.\u201d That covers the brain, especially for physicalists. As to an encyclopaedia, why not look under \u201cmind\u201d, an overall subject including the brain? Or \u201cbiology\u201d, or \u201cphysical\u201d?

Perhaps you can explain why any discussion of brain functions requires knowledge of the brain\u2019s structure.

Richard.

__________________________________________________________________________________________________________________________

From: Ajai Singh
To: Mind_Brain_Consciousness@yahoogroups.com
Sent: Thursday, December 30, 2010 5:32 PM

I think I replied to that, Richard.

\u2018One may say one can very well do thinking without understanding the structure that causes it. I can very well drive a car without understanding its structure. But if I am describing the functions of an entity, I have to know its structure as well, or at least have a working knowledge. If I am describing the functions of a car, I cannot claim any level of expertise unless I have at least a working knowledge of its structure.
Philosophers are not only thinking, they are also talking about thinking. Hence, they fall in both categories. That is why I am urging my philosopher friends of the second category to get a working knowledge of the brain; and include that in their syllabi, their dictionaries and their encyclopaedias.\u2019

Happy New Year.

Ajai
1 Jan 2011.

__________________________________________________________________________________________________________________________

From Richard Godwin: meta@rraz.net Date: Sat, 1 Jan 2011 10:12:01 -0700

Your car analogy doesn\u2019t work quite well enough. With the car there are so many completely different elements, providing completely different functions (such as \u201cengine\u201d and \u201cwheel\u201d), that the analogy breaks down. There are several identified regions in the brain, and they work as neuronal networks through chemical reactions toward any given function. I think philosophers do recognize different regions in the brain, but without scientific knowledge of precisely how those networks perform a task through which regions. But why should that matter? Simply, the brain functions through networks of neurons with chemical impulses in different regions of the brain. Shouldn\u2019t that be sufficient for the task of philosophy?

Expertise in what? If you mean philosophers should be brain experts, then you might be right. But that is the provenance of science, not philosophy. You want a philosopher also to be a scientist, right? Why?

Give us an example of how poor philosophical reasoning is caused by lack of knowledge of the precise structure of the brain.

I\u2019m just trying to get at the root of your problem. So far, simply you have not made your case.

Thank you,

Richard.

__________________________________________________________________________________________________________________________

On Sun, 2/1/11, veena garyali wrote:

I have been following this discussion with great interest. I am not a philosopher but do read a reasonable amount. I tend to agree with Richard. I really do not understand why Ajai is insisting that a philosopher should know the details of brain structure. The fact is that the essential importance of the brain is a given. There can be no thought, feeling or discussion without the brain, and that too the human brain. Without being said, it is understood. Philosophers focus on what comes after that - different world views, the meaning attributed to certain things, and so much else. Do they need to stipulate in the beginning that they know about the existence of the brain? It is like when you go to testify in court: both attorneys generally agree on the expert\u2019s basic qualifications and they stipulate it without going into details. I am by no means an expert and maybe I am missing some deeper meaning, but I fail to see the connection.

Veena

__________________________________________________________________________________________________________________________

On Sunday, 2 January, 2011 20:34, Ajai Singh wrote:

Why should philosophers study the brain?

I am happy Richard and Veena are unhappy with what is presented until now. Because that lets me proceed with a few observations to clarify why I suggested what I did to my philosopher friends.

Let us study a few concepts that have engaged philosophers.
To put them in simple terms:

Mind-body dualism states the mind is separate from the body.

Descartes is credited with stating that mind and body, or consciousness and stuff, interact in the pineal gland.

In the mind-body problem, the idealist view is that the mind alone is real.

In the problem of personal identity, the questions asked are: can a mind animate several bodies? Can several minds animate one body? Can mind exist without a body at all?

In Indian thought, manas [roughly translated as mind] is regarded as an internal sense organ.

And so on, and so forth.

If one has a working knowledge of the brain structure and related functions, one realises that, with regard to each of the points above:

Mind is a function of the brain, and the brain is not at all separate from the body.

The pineal gland is not responsible for any such interaction between physical and mental activity, or mind and body.

The idealist view that mind alone is real is really not talking of mind as a function of the brain at all, but as some metaphysical entity. Because if they were to study the brain and its structure and related functions, they would immediately realise that the thought, \u2018The mind alone is real\u2019, is itself a product of their functioning brain, and they would rather say, \u2018My brain functions are surely real, even if there may be doubt whether the external world is real or not\u2019. If their brain were not functioning, they would not come to either conclusion, for whatever they are worth.

What are we talking of as \u2018mind\u2019 here? If even an elementary study of the brain is done, and if we accept mind to be a collection of brain functions, we would understand the ridiculous nature of these reflections with mind as the focus - the reflection is not at fault, the entity \u2018mind\u2019 as the focus is the culprit. Can a mind animate several bodies, or several minds animate one body?

Manas or Mind as an internal sense organ in earlier Indian thought. An organ? If so, it must be present somewhere in the body? One thinks they meant the brain, which was understandably presented as an \u2018internal sense organ\u2019, given the extant nature of their understanding then.

A clear understanding of brain structure and related functions makes many of these so-called profound problems just evaporate into thin air. And saves precious energies to further more substantial enquiries in the field.

That is why I recommend greater need to study the brain in all its nuances - not just its gross structure, but its function, its neurophysiology, its neurochemistry.

In fact, if a philosopher were to sincerely study all these, and also empathetically study the different reflections on \u2018mind\u2019 and \u2018consciousness\u2019 that philosophers down the centuries have given, in the East and the West, and honestly co-relate them, he would be able to present the most comprehensive understanding of the topic, maybe even a final grand theory that settles matters, once and for all.

Also, if a philosopher-scientist were to study and grasp the workings of the brain, and top it up with a comprehensive understanding of the great mass of knowledge that the great masters of thought - philosophers of the East and West - have bequeathed humanity, and co-relate them in a comprehensive manner, a similar grand theory that settles matters would result.

When I urge my philosopher friends to study the brain, I do so because I believe they have greater potential to give such a theory.
If they continue to live in denial, the scientists are on course, and may pip them to the post.

And, of course, if a scientist-philosopher takes it up, and can get over his denial of philosophy because of its often \u2018amorphous\u2019 formulations, he may beat both the philosophers and the scientists to the post too.

That is the game, that is the final goal, friends. We are playing for really large stakes.

That is why I suggest philosophers plug that loophole in their study - neglecting the brain.

And that is why I also now suggest scientists plug their loophole - neglecting the philosophy of mind.

And if the two can synergise efforts, the goal can be achieved in half the time.

Ready?

Happy New Year once again. And kindly once again pardon the long post.

Ajai
2 Jan 2011

Ajai Singh, E-mail: mensanamonographs@yahoo.co.uk"}

There was an error in the accession number in Materials and Methods. The EBV and KSHV dataset can be found under the Study Accession number: ERP001026.

Individual Sample Accessions are as follows:

1208_JSC1_SN_KS - ERS074823
1212_HBL6_SN_KS - ERS074824
1210_JSC1_SN_EB - ERS074825
1215_HBL6_SN_EB - ERS074833"}

An evaluation of the Bruker SMART X2S for the collection of crystallographic diffraction data, structure solution and refinement is carried out with a variety of materials with different electron densities, presenting some of the successes and challenges of automation in chemical crystallography.

The first two experiments have similar data, both of which are perfectly acceptable for publication, apart from a slight increase in R int. For the third experiment, the crystal is sufficiently far from the centre of the mount that it is precessing in and out of the beam. Thus, the symmetry-equivalent reflections do not match, the Laue check fails and the larger centred unit cell is not identified. The checkCIF output highlights the missed symmetry, as well as the high R int and final R values, and should alert the nonspecialist to the fact that there is a problem.

In summary, the large beam size means that alignment is not as critically important as on other instruments, particularly in the horizontal direction, although for short data collection times and good quality data, it is still important.

3.

N-cyano-S-benzyl-S-(2-fluorophenyl)sulfilimine, (2), was synthesized as described previously. Compounds (4) and (11) are novel (see the supplementary information for their synthesis). Literature methods were used for the synthesis of (5). The polarity of compound (9), as evidenced by the Flack parameter, was determined correctly. For (12)\u2013(14) an incorrect structure was obtained. The data quality seems fine, with no evidence of twinning or disorder, so what types of issues have occurred? The errors involve differentiation between atoms that differ by one electron, e.g. N and O atoms are reversed in (12).

Compounds (2)\u2013(14) are anhydrous samples, without any solvent present. The effectiveness of the system for crystals containing more than one compound was investigated with a hydrate, a solvate and a cocrystal, (15)\u2013(17); (15) and (17) were assigned correctly. For (16), a C and an N atom were misassigned, as discussed in \u00a77.
The effect of inputting an incorrect molecular formula was investigated, since it is necessary to input a formula at the start of the experiment, and in some cases the identity of the crystal may not be known. For example, (5), C13H10OS, was incorrectly input as the sulfoxide, C13H10O2S, and (10), C15H12N2O, was input as C27H22N2O2S. The hydrate (16) and solvate (15) were input as the pure material. The software is robust and coped with an incorrect molecular formula in the majority of cases, with 80% of samples obtaining the correct structure.

There is one issue that causes inconvenience. The system does not update the CIF and report files with the molecular formula based on the structure obtained, but instead uses the formula input by the user. This requires manual refinement for cases where the submitted formula is different from the structure obtained.

8.

Comparative data were also collected on a conventional diffractometer with a Mo K\u03b1 source for (2), (8), (11), (12), (16) and (18), which include samples of both good and poor crystal quality. The synthesis of (18), 4-methyl-N-phenylbenzenesulfonamide, has been described by Massah et al.

In our experience, those users who are familiar with the checkCIF output after an experienced crystallographer has finalized a crystal structure are asking more questions about the checkCIF output they obtain from the SMART X2S. Novice users are also asking similar questions, and some of these questions are about the technique itself. This is a major advantage of the output from the instrument, in that it does seem to be increasing awareness of crystallography among the synthetic chemists.

The checkCIF output allows fast diagnosis of any issues in the experiment. Inputting an incorrect formula at the start of the experiment, for example, will immediately become obvious from the checkCIF output because of differences in formula, density, etc.

In some experiments, a small number of reflections (<5) were measured in the APEX2 software suite with much lower intensity than the rest of the data set, owing to the beam stop blocking or partially blocking the correct measurement of these reflections in some orientations. This is not unusual for chemical crystallography, and omitting these from the latter cycles of refinement would be useful, although it may be better not to do this in a routine manner.

In summary, the SMART X2S as an instrument has allowed chemists with no crystallography experience to obtain crystallographic data for novel compounds. The instrument has greatly increased the use of crystallography in the department, with little training required to operate a user-friendly and easy-to-use instrument.

The CIF data are available in the supplementary information and have been deposited with the Cambridge Crystallographic Data Centre (CCDC) for the novel crystals (2)\u2013(8), (11) and (14)\u2013(18).
[The following computer programs were used in the refinement: APEX2, GIS, SADABS and SAINT (Bruker, 2009), SHELXS97 and SHELXL97 (Sheldrick, 2008), and PLATON (Spek, 2009). The CIF data were prepared following Hall & McMahon (2006).]

kk5074sup1.cif. Crystal structure: contains datablocks Compound_1_Below, Compound_1_Centre, Compound_1_Side, Compound_2_APEX, Compound_2_CCDC, Compound_2_Run_1, Compound_2_Run_2, Compound_2_Run_3, Compound_2_Run_4, Compound_3_CCDC, Compound_3_X2S, Compound_4_CCDC, Compound_4_X2S, Compound_5_CCDC, Compound_5_X2S, Compound_6_CCDC, Compound_6_X2S, Compound_7_CCDC, Compound_7_X2S, Compound_8_APEX, Compound_8_CCDC, Compound_8_X2S, Compound_9_X2S, Compound_10_X2S, Compound_11_APEX, Compound_11_CCDC, Compound_11_X2S, Compound_12_APEX, Compound_12_Manual, Compound_12_X2S, Compound_13_Manual, Compound_13_X2S, Compound_14_CCDC, Compound_14_Manual, Compound_14_X2S, Compound_15_CCDC, Compound_15_Manual, Compound_15_X2S, Compound_16_APEX, Compound_16_CCDC, Compound_16_Manual, Compound_16_X2S, Compound_17_CCDC, Compound_17_X2S, Compound_18_APEX, Compound_18_CCDC, Compound_18_Manual, Compound_18_X2S, global. DOI: 10.1107/S0021889810042561/kk5074sup1.cif

kk50742_CCDCsup2.hkl. Structure factors: contains datablock 2_CCDC. DOI: 10.1107/S0021889810042561/kk50742_CCDCsup2.hkl

kk50743_CCDCsup3.hkl. Structure factors: contains datablock 3_CCDC. DOI: 10.1107/S0021889810042561/kk50743_CCDCsup3.hkl

kk50744_CCDCsup4.hkl. Structure factors: contains datablock 4_CCDC. DOI: 10.1107/S0021889810042561/kk50744_CCDCsup4.hkl

kk50745_CCDCsup5.hkl. Structure factors: contains datablock 5_CCDC. DOI: 10.1107/S0021889810042561/kk50745_CCDCsup5.hkl

kk50746_CCDCsup6.hkl. Structure factors: contains datablock 6_CCDC. DOI: 10.1107/S0021889810042561/kk50746_CCDCsup6.hkl

kk50747_CCDCsup7.hkl. Structure factors: contains datablock 7_CCDC. DOI: 10.1107/S0021889810042561/kk50747_CCDCsup7.hkl

kk50748_CCDCsup8.hkl. Structure factors: contains datablock 8_CCDC. DOI: 10.1107/S0021889810042561/kk50748_CCDCsup8.hkl

kk507411_CCDCsup9.hkl. Structure factors: contains datablock 11_CCDC. DOI: 10.1107/S0021889810042561/kk507411_CCDCsup9.hkl

kk507414_CCDCsup10.hkl. Structure factors: contains datablock 14_CCDC. DOI: 10.1107/S0021889810042561/kk507414_CCDCsup10.hkl

kk507415_CCDCsup11.hkl. Structure factors: contains datablock 15_CCDC. DOI: 10.1107/S0021889810042561/kk507415_CCDCsup11.hkl

kk507416_CCDCsup12.hkl. Structure factors: contains datablock 16_CCDC. DOI: 10.1107/S0021889810042561/kk507416_CCDCsup12.hkl

kk507417_CCDCsup13.hkl. Structure factors: contains datablock 17_CCDC. DOI: 10.1107/S0021889810042561/kk507417_CCDCsup13.hkl

kk507418_CCDCsup14.hkl. Structure factors: contains datablock 18_CCDC. DOI: 10.1107/S0021889810042561/kk507418_CCDCsup14.hkl

kk5074sup15.pdf. Supplementary material file. DOI: 10.1107/S0021889810042561/kk5074sup15.pdf

Figures, tables and supplementary information"}
These molecular \u201cpuzzles\u201d should serve as useful model systems for developers wishing to make foundational improvements to this powerful modeling suite. It may seem self-evident that one should start modeling with the very smallest known sequences that take on well-defined 3D structures. For several reasons, such a completely reductionist approach has not been the mainstream strategy in the Rosetta community. First, in early days, most cases of naturally occurring ultra-small proteins were considered irregular and perhaps ill-defined, lacking the clear \u03b1 or \u03b2 secondary structure and hydrophobic cores that are the hallmarks of larger protein domains. Thus, initial Rosetta studies from the mid-1990s focused on 50- to 100- residue protein sequences that formed regular, clearly well-defined structures de novo modeling of short irregular loops and small proteins regularly appear as sub-problems in blind prediction targets and in the design of catalytic sites, conformationally switchable segments, and structured peptides. Predicting the structural features of these small systems and sub-systems \u2013 and even modeling the fine energetic balance between alternative structures \u2013 is no longer something to be avoided but instead a key goal of many Rosetta developers.These historical reasons to avoid small systems are no longer as relevant. First, since the late 1990s, several very small protein systems have been discovered or engineered and then clearly demonstrated to attain precisely defined 3D structures greater than 3 kcal/mol.) Note that the focus herein will be on recovering high-resolution features of the experimental models; thus an acceptable C\u03b1 RMSD should be 1 \u00c5 or lower, comparable to the differences between structures solved in different crystallographic space groups or with different binding partners. Further, the puzzle descriptions include discussion of side-chain conformations deemed experimentally stable and important for each molecule's fold and function.Each of the selected systems has been extensively characterized by numerous experimental structural and energetic methods. In particular, in each case, the free energy associated with the experimental conformation has been measured to be at least 3 kcal/mol more stable than the ensemble of unstructured states at room temperature. community. The Trp cage is a particularly well-characterized mini-protein with a length of 20 residues, engineered by truncating and optimizing exendin-4 from gila monster saliva de novo modeling, on the other hand, fails to solve this problem. While occasionally sampling a near-native conformation, Rosetta's fragment-assembly/all-atom-refinement protocol (\u201cabrelax\u201d) favors a tight cluster of structures with a backbone within 2 \u00c5 C\u03b1 RMSD of the native conformation but with the molecule's central tryptophan side-chain in an incorrect rotamer mini-proteins. The \u03b1-conotoxin GI, isolated from the fish-hunting marine snail de novo modeling (abrelax) gives low energy models that disagree with the crystal structure in all non-helical regions (not shown). The Stepwise Assembly algorithm yields even lower energy models that are still highly discrepant strategies versus StepWise protocols.]Excision of this segment and subsequent Rosetta de novo. than the optimized native models. Thus, at least for these four small puzzles, it is the poor discrimination of the Rosetta all-atom energy function that emerges as the critical problem. 
In previous studies, the difficulty of conformational sampling for larger problems, as well as the greater energy gaps attained in those problems, likely masked flaws in the Rosetta energy function. So far, however, efforts to revisit the relevant terms have led to few substantial improvements in benchmarks or actual changes in the main codebase. One potential barrier is that the physics of solvation, H-bonds, and screened electrostatic interactions are strongly coupled to each other, and indeed are reflected in partly unified terms in most other molecular modeling force fields.

We summarize sequences and Rosetta command-lines for each of the four puzzles herein. Most of the calculations in the paper were carried out with Rosetta release 3.2, and all calculations will be implemented in the next Rosetta release. Remaining models were generated with the Rosetta codebase in the Das lab branch (revision number 40197), available to Rosetta developers; this code will be gladly provided to other academic users upon request.

A total of 20,000 models were generated for both of the standard Trp cage runs; variations on these runs did not give significantly lower energies. Both of these standard command lines are also executable with Rosetta release 3.2.

Independent modeling runs were carried out with a novel StepWise Assembly (SWA) method. A full benchmark of this method is under preparation. Briefly, the method recursively builds each subfragment [i,j] of the target sequence onto clustered conformational ensembles derived from subfragments [i+1,j] and [i,j\u20131]. Each \u201cstep\u201d involves exhaustively sampling \u03c6/\u03c8 in 20\u00b0 increments, repacking side-chains, and minimizing. An example command-line for the step building subfragment [3,5] from subfragment [4,5] is:

stepwise_protein_test. -database -rebuild -out:file:silent_struct_type binary -fasta 2jof.fasta -n_sample 18 -nstruct 100 -cluster:radius 0.100 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -add_peptide_plane -native 2jof.pdb -mute all -silent1 region_4_5_sample.cluster.out -tags1 S_0 -input_res1 4 5 -sample_res 3 4 -out:file:silent REGION_3_5/START_FROM_REGION_4_5_DENOVO_S_0/region_3_5_sample.out [SWA]

A complete directed acyclic graph (DAG) of the rebuild and clustering steps, along with associated commands in Condor format, was automatically generated by a master Python script. The script and a resulting example DAG are provided via Rosetta protocol capture (see below). The DAG was computed via DAGMAN with the Condor computing platform or with in-house Python scripts on the LSF queuing platform on 200 to 400 cores on Stanford's BioX2 resource.

Optimized native conformations were also estimated with the StepWise Assembly method. To ensure a fair comparison, the entire calculation was repeated, but using Rosetta atom-pair constraints (with the Rosetta smoothed step function \u201cfade\u201d) to keep models with inter-residue C\u03b1-C\u03b1 distances within \u00b11 \u00c5 and the tryptophan rotamer in the native conformation.
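The recursion underlying these DAGs can be sketched abstractly; in the Python sketch below, build_and_cluster is a placeholder for the sampling, repacking, minimization, and clustering stages described above, and the memo dictionary captures the dynamic-programming reuse of subfragment ensembles. This is an illustration of the scheme, not the actual pipeline code.

def swa_ensemble(i, j, build_and_cluster, memo):
    # Clustered conformational ensemble for subfragment [i, j], built
    # recursively from the ensembles for [i+1, j] and [i, j-1]; filling
    # the memo over all subfragments gives the full O(N^2) build-up.
    if (i, j) not in memo:
        if i == j:
            parents = []  # base case: a single residue
        else:
            parents = (swa_ensemble(i + 1, j, build_and_cluster, memo)
                       + swa_ensemble(i, j - 1, build_and_cluster, memo))
        # build_and_cluster enumerates phi/psi for the newly added
        # residue(s), repacks and minimizes side-chains, and keeps only
        # clustered low-energy representatives.
        memo[(i, j)] = build_and_cluster(i, j, parents)
    return memo[(i, j)]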
Explicitly, an example command line is:

stepwise_protein_test. -database -rebuild -out:file:silent_struct_type binary -fasta 2jof.fasta -n_sample 18 -nstruct 100 -cluster:radius 0.100 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -add_peptide_plane -native 2jof.pdb -mute all -silent1 region_4_5_sample.cluster.out -tags1 S_0 -input_res1 4 5 -sample_res 3 4 -out:file:silent REGION_3_5/START_FROM_REGION_4_5_DENOVO_S_0/region_3_5_sample.out -cst_file 2jof_native_CA_CA_trp.cst [SWA NATIVE]

The constraint file 2jof_native_CA_CA_trp.cst is provided by the Rosetta protocol capture.

The modeled \u03b1-conotoxin sequence was: ECCNPACGRHYSC. The methods for de novo modeling \u03b1-conotoxin were essentially the same as for Trp cage. However, the following command lines need to be run from the Das lab branch, which disables complications in disulfide input/output and scoring in the Rosetta release 3.2.

AbinitioRelax. -database -fasta 1not_.fasta -frag3 aa1not_03_05.200_v1_3 -frag9 aa1not_09_05.200_v1_3 -out:file:silent 1not_abrelax_CST_increase_cycles_no_hb_env_dep.out -out:file:silent_struct_type binary -nstruct 400 -cst_file 1not_native_disulf_CEN.cst -abinitio:relax -cst_fa_file 1not_native_disulf.cst -native 1not.pdb -increase_cycles 10 -score:weights score12_no_hb_env_dep.wts -ex1 -ex2 -extrachi_cutoff 0 [ABRELAX]

and

relax. -database -s idealize_1not.pdb -fasta 1not_.fasta -frag3 aa1not_03_05.200_v1_3 -frag9 aa1not_09_05.200_v1_3 -out:file:silent 1not_native_relax.out -out:file:silent_struct_type binary -nstruct 200 -abinitio:relax -cst_fa_file 1not_native_disulf.cst -native 1not.pdb -increase_cycles 10 -score:weights score12.wts -ex1 -ex2 -extrachi_cutoff 0 [NATIVE RELAX]

The constraint file (1not_native_disulf.cst; see protocol capture) enforces near-native disulfide bond lengths and angles between the disulfide-bonded residue pairs; Rosetta atom-pair constraints are defined penalizing S\u03b3\u2013S\u03b3 distances outside 1.5\u20132.5 \u00c5 and inter-residue S\u03b3\u2013C\u03b2 distances outside 2.5\u20133.5 \u00c5. For both command-lines, 20,000 models were generated.

Example SWA command lines for the step building subfragment [3,5] from subfragment [4,5] are:

stepwise_protein_test. -database -rebuild -out:file:silent_struct_type binary -fasta 1not.fasta -n_sample 18 -nstruct 100 -cluster:radius 0.100 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -add_peptide_plane -cst_file 1not_native_disulf.cst -native 1not.pdb -mute all -silent1 region_4_5_sample.cluster.out -tags1 S_0 -input_res1 4 5 -sample_res 3 4 -out:file:silent REGION_3_5/START_FROM_REGION_4_5_DENOVO_S_0/region_3_5_sample.out [SWA]

stepwise_protein_test. -database -rebuild -out:file:silent_struct_type binary -fasta 1not.fasta -n_sample 18 -nstruct 100 -cluster:radius 0.100 -extrachi_cutoff 0 -ex1 -ex2 -score:weights score12.wts -pack_weights pack_no_hb_env_dep.wts -add_peptide_plane -cst_file 1not_native_disulf_CA_CA.cst -native 1not.pdb -mute all -silent1 region_4_5_sample.cluster.out -tags1 S_0 -input_res1 4 5 -sample_res 3 4 -out:file:silent REGION_3_5/START_FROM_REGION_4_5_DENOVO_S_0/region_3_5_sample.out [SWA NATIVE]
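To make the disulfide windows concrete, the following stand-alone check tests a model against the distance ranges quoted above. It is an illustration, not the Rosetta constraint machinery; the coordinate layout and the list of bonded pairs are assumptions of the sketch.

import math

def disulfide_geometry_ok(coords, pairs):
    # coords: dict mapping (residue_number, atom_name) -> (x, y, z);
    # pairs: cysteine residue pairs assumed to be disulfide-bonded.
    def dist(a, b):
        return math.dist(coords[a], coords[b])
    for i, j in pairs:
        # S-gamma to S-gamma must fall within 1.5-2.5 Angstroms
        if not 1.5 <= dist((i, "SG"), (j, "SG")) <= 2.5:
            return False
        # inter-residue S-gamma to C-beta within 2.5-3.5 Angstroms
        if not 2.5 <= dist((i, "SG"), (j, "CB")) <= 3.5:
            return False
        if not 2.5 <= dist((j, "SG"), (i, "CB")) <= 3.5:
            return False
    return True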
The chymotrypsin inhibitor sequence was the 62-residue truncated sequence from barley seeds: TEWPELVGKSVEEAKKVILQDKPEAQIIVLPVGTIVTMEYRIDRVRLFVDKLDNIAEVPRVG. The Rosetta command line made use of a recently developed loop modeling protocol that leverages kinematic loop closure (KIC). The command line used was the following:

loopmodel. -database -loops:remodel perturb_kic -loops:refine refine_kic -loops:input_pdb 2ci2_min.pdb -in:file:native 2ci2.pdb -loops:loop_file 2ci2_35_45.loop -loops:max_kic_build_attempts 10000 -in:file:fullatom -out:file:fullatom -out:prefix 2ci2 -out:pdb -ex1 -ex2 -extrachi_cutoff 0 -out:nstruct 200 -out:file:silent_struct_type binary -out:file:silent 2ci2_kic_loop35_45.out [KIC]

10,000 KIC models were generated. Output files were rescored to generate RMSDs over just the rebuilt loops, using the command line:

score. -database -in:file:silent 2ci2_kic_loop35_45.out -native 2ci2.pdb -out:file:scorefile 2ci2_kic_loop35_45.recalculate_rmsd.sc -in:file:silent_struct_type binary -in:file:fullatom -native_exclude_res 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62

The optimized native conformation (2ci2_min.pdb) was generated by packing and minimizing side-chains, as described above. StepWise Assembly methods were also applied to this case, but could not be directly compared to the KIC results because of differences in which degrees of freedom were optimized.

The eight-nucleotide modeled RNA sequence, derived from residues 31\u201338 of a ribosomal fragment (PDB: 1F7Y), was: gcuucggc. (Lower-case letters refer to nucleic acids in Rosetta.)

De novo models were generated by Fragment Assembly of RNA with Full-Atom Refinement (FARFAR):

rna_denovo. -random_delay 20 -database -fasta gcuucggc.fasta -nstruct 200 -out::file::silent gcuucggc.out -minimize_rna -cycles 5000 -mute all -native gcuucggc_RNA.pdb [FARFAR]

Optimized native conformations used a similar command line but drew fragments only from the crystallographic model that was the source of the puzzle:

rna_denovo. -random_delay 20 -database -fasta gcuucggc.fasta -nstruct 200 -out::file::silent gcuucggc_NATIVE.out -minimize_rna -cycles 5000 -mute all -native gcuucggc_RNA.pdb -vall_torsions 1f7y_native.torsions [FARFAR NATIVE]

In both cases, 20,000 FARFAR models were generated. The native torsion file was generated by:

rna_database. -database -s 1f7y_RNA.pdb -vall_torsions -o 1f7y_native.torsions

For RNA modeling cases, StepWise Assembly provides a more efficient sampling method. Analogous to protein cases (A) and (B), sub-fragments [i,j] of the target sequence are modeled from clustered conformational ensembles for subfragments [i+1,j] and [i,j\u20131] in a recursive manner. Single residues are enumeratively sampled in 20\u00b0 increments, repacking 2\u2032-OH groups, and minimizing. Here, an ideal Watson-Crick stem was assumed for residues 1\u20132 and 7\u20138, and the UUCG loop 3\u20136 was rebuilt from both ends and connected by CCD loop closure.
Example command lines for the basic rebuild step, building residue 3 onto the starting stem in the de novo and native-optimization runs respectively, are:

rna_swa_test. -algorithm rna_resample_test -database -fasta gcuucggc.fasta -output_virtual -cluster:radius 0.100 -num_pose_kept 100 -score:weights rna_hires_2008.wts -native motif2_1f7y_RNA.pdb -rna_torsion_potential rd2008 -s1 gcgc.pdb -input_res1 1 2 7 8 -out:file:silent REGION_0_1/START_FROM_REGION_0_0/region_0_1_sample.out -sample_res 3 [SWA]

rna_swa_test. -algorithm rna_resample_test -database -fasta gcuucggc.fasta -output_virtual -cluster:radius 0.100 -num_pose_kept 100 -score:weights rna_hires_2008.wts -native motif2_1f7y_RNA.pdb -cst_file uucg_polar_fade.cst -sampler_native_rmsd_screen -sampler_native_rmsd_screen_cutoff 1.500 -rna_torsion_potential rd2008 -s1 gcgc.pdb -input_res1 1 2 7 8 -out:file:silent REGION_0_1/START_FROM_REGION_0_0/region_0_1_sample.out -sample_res 3 [SWA NATIVE]

In the latter command-line, a constraint file penalizes conformations in which contacting (within 4 \u00c5) polar heavy atoms are placed beyond 1 \u00c5 from their native distances; the file is provided in the protocol capture (see next).

All files, including fragments, sequence files (.fasta), native conformations (.pdb), as well as example logs, are being provided via \u201cprotocol capture\u201d in the Rosetta Subversion repository: https://svn.rosettacommons.org/source/trunk/RosettaCon2010/protocol_capture/rhiju_four_small_puzzles. The directory will be gladly provided to readers without access to the repository upon request."}

To the Editor: Kala-azar, or visceral leishmaniasis, is a parasitic disease that leads to fever, anemia, and hepatosplenomegaly. Death is the usual outcome when infection is not treated. The majority of infections are caused by the protozoan Leishmania donovani, restricted to India and eastern Africa, but the most widespread are caused by L. infantum, found from the People\u2019s Republic of China to the New World, where it infects humans, dogs, and wild canids. All Mediterranean countries are affected by L. infantum, where most patients are co-infected with HIV. Several species of sand flies transmit the disease; the vector may also be carried to new areas, possibly by insect eggs or larvae being carried in organic matter.

During the 1980s, urban transmission of kala-azar became a major problem in Brazil. More than 3,000 cases are reported annually, and the disease has spread from northeastern Brazil westward to the Amazon region, as well as to the industrialized southeast. Several as yet unproven explanations for the urbanization of kala-azar in Brazil have been proposed.

Kala-azar has now reached the temperate Brazilian south and Argentina. This spread of the disease warns us of the danger of introduction in other temperate areas. Europe is particularly vulnerable because of the existing natural transmission of L. infantum; the minimum temperatures of Lisbon, for example, are only 3\u20134\u00b0C lower than those of S\u00e3o Borja, Rio Grande do Sul state, the southernmost city where L. longipalpis transmits kala-azar, and even warmer than those of Chajar\u00ed, Argentina, at the highest southern latitude where this vector is found.

Human kala-azar is less common in Europe, possibly because sand flies there are less anthropophilic.
If aircraft introduce anthropophilic L. longipalpis sand flies in Lisbon, the situation could change dramatically, and kala-azar might become a major urban disease in Europe. The International Health Regulations recommend disinfection of aircraft by preflight and blocks-away spraying with pyrethroids. Several aspects of L. longipalpis sand flies remain to be studied, such as minimum temperature tolerance, mechanisms of urban spread, presence in aircraft, and role in inducing more severe disease."}

In Page 6, right column, second paragraph, COG1517 should be COG1571.

In Table\u2009\u20091, page 7, HVO_2477 should be HVO_2744 and HVO_B0354 should be HVO_B0357.

In Page 8, right column, end of last paragraph, HVO_2477 should be HVO_2744.

In Supplemental Table\u2009\u20091, the entry VDC2399 H26 \u0394HVO_B0354 should read VDC2399 H26 \u0394HVO_B0357.

In Supplemental Table\u2009\u20094, all HVO_2477 entries should be HVO_2744, and all HVO_B0354 entries should be HVO_B0357.

These corrections do not influence the overall conclusions of this study."}

Computational design of protein function involves a search for amino acids with the lowest energy subject to a set of constraints specifying function. In many cases a set of natural protein backbone structures, or \u201cscaffolds\u201d, are searched to find regions where functional sites can be placed, and the identities of the surrounding amino acids are optimized to satisfy functional constraints. Input native protein structures almost invariably have regions that score very poorly with the design force field, and any design based on these unmodified structures may result in mutations away from the native sequence solely as a result of the energetic strain. Because the input structure is already a stable protein, it is desirable to keep the total number of mutations to a minimum and to avoid mutations resulting from poorly-scoring input structures. Here we describe a protocol using cycles of minimization with combined backbone/sidechain restraints that is Pareto-optimal with respect to RMSD to the native structure and energetic strain reduction. The protocol should be broadly useful in the preparation of scaffold libraries for functional site design.

There has been recent progress in the computational design of functional proteins and in the prediction of biomolecular interactions across a wide range of problems, ligand-protein interactions among them. Most crystal structures will have regions of high energy as evaluated in Rosetta or other design programs, which will lead to sequence changes in design if they are not addressed. However, most minimization protocols will lead to too much deviation from the original wild-type crystal structure. The question is how to properly balance energy minimization with reduction of structural deviation from the starting structure. The concept of optimizing a structure to the energy function in use has a long precedent \u2013 for example, the equilibration of structures in molecular dynamics prior to a production run.

Here we carry out a systematic examination of a number of structure refinement methods, evaluating them for optimality (minimization of Rosetta energy and RMSD from the starting structure simultaneously) and for their influence on subsequent sequence re-design. We found that a combination of harmonic backbone and sidechain coordinate restraints minimizes all-atom RMSD and Rosetta energy together, and reduces the number of sequence changes in subsequent design. Because the restrained relax protocol examined here results in fewer sequence alterations in design, it also minimizes the amount of human intervention required in the overall design process. Our goal in evaluating structure minimization protocols was first to minimize RMSD to the native (primum non nocere, or \u201cfirst do no harm\u201d) while also minimizing the Rosetta energy.
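Because the protocol is framed as Pareto-optimal in the (RMSD, energy) plane, it is worth recalling how a Pareto frontier is extracted from a set of candidate protocols. The Python sketch below is generic, not the paper's analysis code.

def pareto_front(points):
    # points: list of (rmsd, energy) pairs, one per protocol/parameter
    # setting; both objectives are minimized. A point is kept unless
    # some other point is at least as good in both objectives.
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

points = [(0.3, -110.0), (0.8, -120.0), (0.9, -105.0)]
print(pareto_front(points))  # the third point is dominated by both others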
One possibility in preparing structures for Rosetta is to re-refine the structure using an energy function incorporating standard Rosetta score terms as well as the correspondence to electron density. In this work, per-residue energies were examined to understand energetic changes effected by the protocol at a residue-by-residue level. The high-energy residues (above roughly 5 REU) are all moved to lower energy by the protocol; residues in the 1\u20135 REU range are largely moved to lower energy; residues with negative energy remain mostly unchanged. The design protocol on the unmodified native structure places a glutamate at position 14, in place of the native tyrosine.

The 41 protein monomer sequence recovery set has been described elsewhere. Briefly, these structures were prepared for Rosetta by removal of waters and non-canonical amino acids and preparation of Rosetta parameter files for the ligands. Structure selection is described fully in Niv\u00f3n and Bjelic.

The fast relax protocol in Rosetta is described elsewhere; runs take minutes on one core of a mixed Intel-Xeon-L5335-2GHz/AMD-Opteron-2.2GHz cluster, roughly doubling in time for each additional 100 residues.

Harmonic coordinate restraints take the form f(d) = (d/sd)^2, where d is the distance of the atom from the desired coordinate, and sd is a parameter related to the strength of the restraint. Bounded coordinate restraints take a zero value within WIDTH of the desired coordinate, followed by a small harmonic segment to transition (from WIDTH to WIDTH + 0.5*sd) and a linear segment of slope 1/sd for the rest. Sidechain-sidechain restraint runs have harmonic distance restraints of sd = 2.0 for all sidechain atom pairs within the specified cutoff distance, and backbone heavy atom harmonic coordinate restraints of sd = 0.5.

Parameter scans were performed with the flags:

\u201c-no_optH false -flip_HNQ -use_input_sc -correct -no_his_his_pairE -linmem_ig 10 -nblist_autoupdate true\u201d

Harmonic runs added the flags \u201c-constrain_relax_to_start_coords -relax:ramp_constraints false -relax:coord_constrain_sidechains -relax:coord_cst_stdev <SD>\u201d, where SD ranged from 0.000001 to 5.0.

Bounded runs added the flags \u201c-constrain_relax_to_start_coords -relax:ramp_constraints false -relax:coord_constrain_sidechains -relax:coord_cst_stdev <SD> -relax:coord_cst_width <WIDTH>\u201d, where SD ranged from 0.1 to 5.0, and WIDTH ranged from 0 to 1.0.

Sidechain-sidechain runs added the flags \u201c-constrain_relax_to_start_coords -relax:ramp_constraints false -relax:sc_cst_maxdist <DIST>\u201d, where DIST ranged from 3 to 8.

Unrestrained runs had no additional flags.

RMSD values were computed in pymol with the command \u201calign relaxed and not hydro, reference and not hydro, cycles=0\u201d. The median per-residue energy and RMSD of 10 replicates of the relax procedure were taken as the value for that protein, and the mean value across the 51 proteins in the input set was computed for each of the different parameter runs.
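The two restraint forms scanned above can be transcribed directly from their definitions; the following is an illustrative transcription of the stated formulas (with d the displacement of an atom from its reference coordinate), not Rosetta source code.

def harmonic_restraint(d, sd):
    # f(d) = (d / sd)^2: quadratic penalty that grows without bound;
    # smaller sd means a stiffer restraint.
    return (d / sd) ** 2

def bounded_restraint(d, sd, width):
    # Zero within `width` of the reference coordinate, a short harmonic
    # shoulder from width to width + 0.5 * sd, then linear with slope
    # 1 / sd (the shoulder meets the line with matching slope).
    if d <= width:
        return 0.0
    shoulder_end = width + 0.5 * sd
    if d <= shoulder_end:
        return ((d - width) / sd) ** 2
    return 0.25 + (d - shoulder_end) / sd  # ((0.5 * sd) / sd)^2 = 0.25

The linear tail keeps gradients bounded for large displacements, which is what distinguishes the bounded form from the plain harmonic one.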
Design was performed with the Rosetta enzyme design program, using the default parameters from the enzdes scientific test in Rosetta (rosetta/rosetta_tests/scientific/biweekly/tests/enzdes_benchmark). Briefly, a design shell of 6 \u00c5, with a 12 \u00c5 shell of repackable residues, was used. One cycle of design with a \u201csoft\u201d Lennard-Jones repulsive term was followed by one with the standard repulsive term. Extra rotamers were included based on the Dunbrack rotamer distribution, and recent updates to the Rosetta score function were also included. The full set of flags was:

-l ./inputs/pdb.list
-enzdes::detect_design_interface
-enzdes::cut1 6.0
-enzdes::cut2 8.0
-enzdes::cut3 10.0
-enzdes::cut4 12.0
-enzdes::cst_design
-enzdes::design_min_cycles 2
-enzdes::cst_min
-enzdes::chi_min
-ex1
-ex2
-ex1aro
-ex2aro
-extrachi_cutoff 1
-soft_rep_design
-flip_HNQ
-correct
-no_his_his_pairE
-score::hbond_params correct_params
-lj_hbond_hdis 1.75
-lj_hbond_OH_donor_dis 2.6
-dun08 false
-nstruct 1
-enzdes::no_unconstrained_repack
-linmem_ig 10
-nblist_autoupdate true
-enzdes::lig_packer_weight 1.8
-docking::ligand::old_estat true
-extra_res_fa inputs/2b3b.params inputs/2ifb.params inputs/1sw1.params inputs/2FQX.params inputs/2p0d.params inputs/2DRI.params inputs/1fby.params inputs/1ZHX.params inputs/2RDE.params inputs/1db1.params inputs/2h6b.params inputs/1z17.params inputs/2FME.params inputs/1y3n.params inputs/1urg.params inputs/1FZQ.params inputs/1y52.params inputs/1POT.params inputs/1XT8.params inputs/2FR3.params inputs/2UYI.params inputs/1USK.params inputs/1n4h.params inputs/2qo4.params inputs/2GM1.params inputs/2rct.params inputs/2HZQ.params inputs/1hsl.params inputs/1A99.params inputs/1uw1.params inputs/1l8b.params inputs/3B50.params inputs/1H6H.params inputs/2Q2Y.params inputs/1hmr.params inputs/1OPB.params inputs/1x7r.params inputs/2Q89.params inputs/1nl5.params inputs/1TYR.params inputs/2e2r.params inputs/1LKE.params inputs/2PFY.params inputs/1wdn.params inputs/1nq7.params inputs/1y2u.params inputs/2ioy.params inputs/1J6Z.params inputs/1RBP.params inputs/1XZX.params inputs/2f5t.params
-chemical:exclude_patches LowerDNA UpperDNA Cterm_amidation SpecialRotamer VirtualBB ShoveBB VirtualDNAPhosphate VirtualNTerm CTermConnect sc_orbitals pro_hydroxylated_case1 pro_hydroxylated_case2 ser_phosphorylated thr_phosphorylated tyr_phosphorylated tyr_sulfated lys_dimethylated lys_monomethylated lys_trimethylated lys_acetylated glu_carboxylated cys_acetylated tyr_diiodinated N_acetylated C_methylamidated MethylatedProteinCterm

File S1. Supporting figures and tables.

Table S1. Energy comparison between input and restrained-relax structures. Average over all energy terms for the input set (INPUT) and the set relaxed with sidechain coordinate restraints at sd = 0.5 (COORD). Values are sorted by the difference in each score term. Energy terms fa_dun (sidechain rotamer probability-based energy) and fa_rep (Lennard-Jones repulsive term) are the largest contributors to lower energy after the relax.

Table S2. The top-10 most-improved residues after the relax protocol for 1A99. Score term changes are shown for the fa_dun and fa_rep score terms only. INPUT is for the original wild-type crystal structure. COORD is for the coordinate-restrained relax structures. Delta is the difference between INPUT and COORD for the specified score term and residue.

Table S3. Residue 14 energies for 2ifb.pdb before/after restrained relax and before/after design. Note that after the restrained relax the total energy for residue 14 (\u22121.17) is much lower, even slightly lower than the energy after enzyme design on an unrelaxed input structure (\u22121.14).

Figure S1. Per-residue score comparison between an input and restrained-relax structure.
Histogram of individual residue Rosetta scores for the putrescine receptor (1A99.pdb) unmodified (blue) and after the coordinate-restrained relax protocol (red), with overlap in purple.

Figure S2. Example of improved design after restrained relax. The 2ifb position 14 example, with the native in green, design on the unmodified native in cyan, and design on the restrained-relax native in purple. The wild-type identity at position 14 is mutated from tyrosine to glutamate in the run on an unmodified structure (cyan), but remains the original tyrosine in the run on the restrained-relax structure (purple). (DOCX)"}

The purpose of the presented study was to trace orthologs of btd in other insects and reconstruct the evolutionary history of the Sp genes within the metazoa.

The Sp-family of transcription factors are evolutionarily conserved zinc finger proteins present in many animal species. The orthology of the Sp genes in different animals is unclear and their evolutionary history is therefore controversially discussed. This is especially the case for the Sp gene btd.

We isolated Sp genes from representatives of a holometabolous insect (Tribolium castaneum), a hemimetabolous insect (Oncopeltus fasciatus), primitively wingless hexapods (Folsomia candida and Thermobia domestica), and an amphipod crustacean (Parhyale hawaiensis). We supplemented this data set with data from fully sequenced animal genomes. We performed phylogenetic sequence analysis with the result that all Sp factors fall into three monophyletic clades. These clades are also supported by protein domain structure, gene expression, and chromosomal location. We show that clear orthologs of the D. melanogaster btd gene are present even in the basal insects, and that the Sp5-related genes in the genome sequence of several deuterostomes and the basal metazoans Trichoplax adhaerens and Nematostella vectensis are also orthologs of btd.

All available data provide strong evidence for an ancestral cluster of three Sp-family genes as well as synteny of this Sp cluster and the Hox cluster. The ancestral Sp gene cluster already contained a Sp5/btd ortholog, which strongly suggests that btd is not the result of a recent gene duplication, but directly traces back to an ancestral gene already present in the metazoan ancestor.

Several Sp genes, including Sp1, have been identified in the human genome, and homologous genes have been isolated from several other animal species as well. In Drosophila, btd codes for a member of the Sp-family, which represents an important factor for the formation of several head segments and is also involved in the development of the central and peripheral nervous system.

The newly isolated sequences have been deposited under the following accession numbers: Td_Sp1-4 [EMBL: FN562988], Td_Sp5/btd [EMBL: FN562989], Td_Sp6-9 [EMBL: FN562990], Fc_Sp1-4 [EMBL: FN562985], Fc_Sp5/btd [EMBL: FN562986], Fc_Sp6-9 [EMBL: FN562987], Ph_Sp1-4 [EMBL: FN562991], Ph_Sp6-9 [EMBL: FN562992]. BLAST analysis was used to identify the Sp1-4 homologues of D. melanogaster and T. castaneum. Gene specific primers were made to amplify Tc_btd [GenBank: NM_001114320.1], Tc_Sp8 [GenBank: NM_001039420] and Tc_Sp1-4 [GenBank: XM_967159] from T. castaneum cDNA, as well as Dm_btd [GenBank: NM_078545], Dm_D-Sp1 [GenBank: NM_132351] and Dm_CG5669 (Sp1-4) [GenBank: NM_142975] from D. melanogaster cDNA. The sequences of these primers are given in an Additional File (see below).
sapiens (Genome Reference Consortium Human Build 37 (GRCh37), Primary_Assembly) , Dm_Btd [GenBank: NP_511100], Dm_D-Sp1 [GenBank: NP_572579], Dps_GA19045 [GenBank: XP_001358829], Dps_GA22354 [GenBank: XP_002134535], Dps_GA12282 [GenBank: XP_001354397], Ag_Sp1-4 [GenBank: NZ_AAAB02008898], Ag_Sp5/Btd [GenBank: NZ_AAAB02008847], Ag Sp6-9 [GenBank: NZ_AAAB01008847]; Nav_Sp1-4 [GenBank: XP_001599101], Nav_Sp5/Btd [GenBank: AAZX01008599], Nav_Sp6-9 [GenBank: XP_001606079], Am_Sp1-4 [GenBank: XP_624316.2], Am_Sp5/Btd [GenBank: XP_001119912], Am_Sp6-9 [GenBank: XP_624528], Bm_Sp1-4 [GenBank: BABH01010251], Bm_Sp5/Btd [GenBank: BABH01024462], Bm_Sp6-9 [GenBank: AADK01002198], Tc_Sp1-4 [GenBank: XP_972252], Tc_Btd [GenBank: NP_001107792], Tc_Sp8 [GenBank: NP_001034509], Of_Sp8/9 [EMBL: FN396612], Nv_Sp1-4 [GenBank: XP_001635004], Nv_Sp5/Btd [GenBank: XP_001635002], Nv_Sp6-9 [GenBank: XP_001634948], Sp_Sp1-4 [GenBank: XR_025838], Sp_Sp5/Btd [GenBank: XP_789110.1], Sp_Sp6-9 [GenBank: XP_793203.2], Hs_Sp1 [GenBank: NP_612482], Hs_Sp2 [GenBank: NP_003101], Hs_Sp3 [GenBank: NP_003102], Hs_Sp4 [GenBank: NP_003103], Hs_Sp5 [GenBank: NP_001003845], Hs_Sp6 [GenBank: NP_954871], Hs_Sp7 [GenBank: NP_690599], Hs_Sp8 [GenBank: NP_874359], Hs_Sp9 [GenBank: NP_001138722], Mm_Sp1 [GenBank: NP_038700], Mm_Sp2 [GenBank: NP_084496], Mm_Sp3 [GenBank: NP_035580], Mm_Sp4 [GenBank: NP_033265], Mm_Sp5 [GenBank: NP_071880], Mm_Sp6 [GenBank: NP_112460], Mm_Sp7 [GenBank: NP_569725], Mm_Sp8 [GenBank: NP_796056], Mm_Sp9 [GenBank: NP_001005343], Dr_Sp1 [GenBank: NP_997827], Dr_Sp2 [GenBank: NP_001093452], Dr_Sp3 [GenBank: NP_001082967], Dr_Sp3-like [GenBank: XP_691096], Dr_Sp4 [GenBank: NP_956418], Dr_Sp5 [GenBank: NP_851304], Dr_Sp5-like [GenBank: NP_919352], Dr_Similar_to_Sp5 [GenBank: XP_001335730], Dr_Sp6 [GenBank: NP_991195], Dr_Sp7 [GenBank: NP_998028], Dr_Sp8 [GenBank: NP_998406], Dr_Sp8-like [GenBank: NP_991113], Dr_Sp9 [GenBank: NP_998125], Gg_Sp1 [GenBank: NP_989935], Gg_Sp2 [GenBank: XP_423405], Gg_Sp3 [GenBank: NP_989934], Gg_Sp4 [GenBank: XP_418708], Gg_Sp5 [GenBank: NP_001038149], Gg_Sp8 [GenBank: AAU04515.1], Gg_Sp9 [GenBank: AAU04516.1], Fr_Sp1 [GenBank: CAAB01000453.1], Fr_Sp2 [GenBank: CAAB01001586.1], Fr_Sp3 [GenBank: CAAB01000508.1], Fr_Sp3-like [GenBank: CAAB01000254.1], Fr_Sp4 [GenBank: CAAB01001019.1], Fr_Sp5 [GenBank: CAAB01001064.1], Fr_Sp5-like [GenBank: CAAB01000006.1], Fr_Sp6 [GenBank: CAAB01004244.1], Fr_Sp7 [GenBank: CAAB01000453.1], Fr_Sp8 [GenBank: CAAB01001019.1], Fr_Sp9 [GenBank: CAAB01000508.1]. In addition, we have provisionally annotated the Sp-family genes of D. pulex, T. adhaerens and B. floridae using the following genomic regions: Dp_Sp1-4 , Dp_Sp5/btd , Dp_Sp6-9 , Ta_Sp1-4 , Ta_Sp5/btd , Ta_Sp6-9 , Bf_Sp1-4 , Bf_Sp5/btd , Bf_Sp6-9 .Click here for fileGenomic locations of Sp genes and Hox genes. This table supplements the schematic overview given in Fig. H. sapiens: Genome Reference Consortium Human Build 37 (GRCh37), Primary_Assembly; D. melanogaster: release 5.10, A. gambiae: AgamP3.3, A. mellifera: Amel_4.0, T. castaneum: Tcas_3.0, D. pulex: JGI-2006-09, N. vectensis: Nematostella vectensis v1.0. The data for the N. vectensis Hox genes can be found in the references given in the table. Alternating shading for different species is used in the table to enhance the legibility of the table. Abbreviations: LG, linkage group; un, unassembled portions of the genome.Click here for fileSequence information about the gene specific primers used in this study. 
The first column gives the species and the gene. The second column gives the primer sequences in 5' to 3' orientation. The third column gives the length of the cloned fragment resulting from the PCR with the given primers. The fourth column gives the clone ID number. The fifth column gives the polymerase used to transcribe the RNA probe used for in situ hybridizations. The primers for D. melanogaster and T. castaneum have been designed as gene-specific pairs using the genome sequence information. For O. fasciatus, T. domestica, F. candida, and P. hawaiensis we first isolated a small fragment of the genes using degenerate primers specified in the Materials and methods section. The gene-specific RACE primers were designed on the basis of this sequence information and were used in conjunction with the commercial RACE adaptor primers. The cloned fragment of Ph_Sp6-9 resulted from priming with the given primer pair. Abbreviations: Fwd, forward; Rev, reverse. Click here for file"}
{"text": "It is understood that cancer is a clonal disease initiated by a single cell, and that metastasis, which is the spread of cancer from the primary site, is also initiated by a single cell. The seemingly natural capability of cancer to adapt dynamically in a Darwinian manner is a primary reason for therapeutic failures. Survival advantages may be induced by cancer therapies and also occur as a result of inherent cell and microenvironmental factors. The selected "more fit" clones outmatch their competition and then become dominant in the tumor via propagation of progeny. This clonal expansion leads to relapse, therapeutic resistance and eventually death. The goal of this study is to develop and demonstrate a more detailed clonality approach by utilizing integrative genomics. Patient tumor samples were profiled by Whole Exome Sequencing (WES) and RNA-seq on an Illumina HiSeq 2500, and methylation profiling was performed on the Illumina Infinium 450K array. STAR and the Haplotype Caller were used for RNA-seq processing. Custom approaches were used for the integration of the multi-omic datasets. Reported are major enhancements to CloneViz, which now provides capabilities enabling a formal tumor multi-dimensional clonality analysis by integrating: i) DNA mutations, ii) RNA expressed mutations, and iii) DNA methylation data. RNA and DNA methylation integration were not previously possible, by CloneViz (previous version) or any other clonality method to date. This new approach, named iCloneViz (integrated CloneViz), employs visualization and quantitative methods, revealing an integrative genomic mutational dissection and traceability through the different layers of molecular structures. The iCloneViz approach can be used for analysis of clonal evolution and mutational dynamics of multi-omic data sets. Revealing tumor clonal complexity in an integrative and quantitative manner facilitates improved mutational characterization, understanding, and therapeutic assignments. It is recognised that cancer is a clonal disease instigated by a single cell and that metastasis also commences through a single cell. Evolution is an important scientific concept because it works. It provides a framework to explain changes in biological systems. Cancer is the result of an evolutionary process, but it is destructive, since it involves the loss of mechanisms that are implemented to protect against uncontrolled and undifferentiated growth.
Ultimately, natural selection has a harsh reality that worried Darwin, namely, that all that seems to matter is reproductive success.

Multiple myeloma (MM) is a cancer of the bone marrow characterized by a malignant transformation and proliferation of plasma cells.

There exist an array of computational methods/tools that allow one to characterize various aspects of the clonal architecture of a tumor(s). Each method employs different computational and visualization techniques, and they are very briefly described here. SciClone allows for the inference of clonal architecture by clustering variant allele frequencies. Clomial is another method, which decomposes tumor samples into clones by fitting a binomial mixture model across multiple samples. All of the aforementioned referenced methods only make use of a single modality in their characterizations of clonal architecture, namely, DNA-based mutational data, culled from whole genome or whole exome sequencing (WES) experiments. Some methods also attempt to account for the effect of copy number variation on clonal architecture. This is contrasted to the multiple modality datasets employed in the characterization of clonal structure used within iCloneViz. iCloneViz is the only known computational method for inferring clonal architecture which integrates multiple modality datasets to derive deeper biological meaning. For example, RNA variant calling is performed to detect whether or not a mutation found within the DNA (via a WES or WGS experiment) is detectable within an RNA transcript. Further, if a mutation is found to be present at the RNA level, the expression values associated with the transcript(s) containing the mutation can be quantified and visualized. Finally, DNA methylation data is integrated into the analysis, which could lead to hypotheses regarding methylation suppressing the expression of a tumor suppressor gene (TSG). iCloneViz tracks TSGs both by mutations and by epigenetic/methylation events.

Patient samples were profiled at diagnosis (Presentation) and then later when the cancer recurred (Relapse). A novel bioinformatic approach named CloneViz was previously developed. The pseudocode below uses the following data structures (an R mirror of the defaults follows the list):

● W: WES data in relation W for the patient experiment with the experiment identifier exp_id[i].
● filter_settings: Data structure containing all filter settings, including:
○ min_vaf: Minimum variant allele frequency (default 4%).
○ max_vaf: Maximum variant allele frequency (default 100%).
○ min_depth: Minimum read depth (default 20).
○ max_depth: Maximum read depth (default 1000).
○ min_meth: Minimum methylation percent (default 25%).
○ opacity: Opacity of scatter plot points (default 50%).
○ show_kg_only: Boolean indicating whether to show only mutations found in the key genes list K.
● default_filter_settings: Default values used for filtering.
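As an illustrative aside, the defaults above translate directly into an R structure; this mirror is hypothetical and is only meant to make the settings concrete, not part of the iCloneViz code.

# Hypothetical R mirror of the default filter settings listed above.
default_filter_settings <- list(
  min_vaf      = 0.04,   # minimum variant allele frequency (4%)
  max_vaf      = 1.00,   # maximum variant allele frequency (100%)
  min_depth    = 20,     # minimum read depth
  max_depth    = 1000,   # maximum read depth
  min_meth     = 0.25,   # minimum methylation percent (25%)
  opacity      = 0.50,   # opacity of scatter plot points (50%)
  show_kg_only = FALSE   # restrict display to the key genes list K
)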
/*******************************************************
Name & Purpose: Main, program entry point
Inputs: None
Processing: Establish main processing loop for iCloneViz
Outputs: None
Returns: None
Authors: D. Johann, E. Peterson
********************************************************/
main
{
while (window is open)
{
patient_id = display_patient_search;
button_selection, exp_id = display_patient_experiments(patient_id);
process;
}
}

/*******************************************************
Name & Purpose: Display Patient Search; via patient ID, display patient meta-data
Inputs: None
Processing: Display patient meta-data
Outputs: None
Returns: ID of selected patient from MPMDB
Authors: D. Johann, E. Peterson
********************************************************/
display_patient_search
{
patient_id = input from user;
- get / display patient meta-data via relation P;
return patient_id;
}

/*******************************************************
Name & Purpose: Display Patient Experiments, show available experiments for iCloneViz analysis
Inputs: Patient ID
Processing: Retrieve patient experimental meta-data from MPMDB
Outputs: Patient experimental data now in memory
Returns: Array of experiment IDs; button selection for selected analysis, e.g., Genomic Real Estate, Paired Scatter Plot, or KDE plus Scatter Plot
Authors: D. Johann, E. Peterson
********************************************************/
display_patient_experiments(patient_id)
{
- get / display all experimental data from the database (MPMDB) for the patient via relation P;
exp_id = selected experiment ids;
return button_selection, exp_id;
}

/*******************************************************
Name & Purpose: Process, process data and display visualization based on the user's choice of visualization
Inputs: Button selection, & experiment IDs
Processing: Execute specific function to handle processing based on user's choice of visualization
Outputs: None
Returns: None
Authors: D. Johann, E. Peterson
********************************************************/
process
{
if (button_selection == 'Genomic Real Estate')
genomic_real_estate(exp_id[0]);
else if (button_selection == 'Paired Scatter Plot')
paired_scatter_plot(exp_id[0...1]);
else if (button_selection == 'KD + Scatter Plot')
kd_plus_scatter_plot(exp_id[0]);
}

/*******************************************************
Name & Purpose: Genomic Real Estate, fetch WES-based mutation data, using R.NET to generate an R plot and visualize it
Inputs: Experiment ID of experiment to visualize
Processing: Using the R.NET API, generate plot image and display in window
Outputs: Scatter plot of all mutations by chromosome and variant allele frequency; read depth is encoded by color (see Additional File 2)
Returns: None
Authors: D. Johann, E. Peterson
********************************************************/
genomic_real_estate(exp_id)
{
- execute query to MPMDB to fetch mutation data based on the exp_id and DB relation W;
- establish the R.NET interface and invoke R;
- divide x-axis into 24 sections;
- scale each section by the length of each chromosome;
for each w in W
{
- plot w along the x-axis by position, the y-axis by variant allele frequency, and color by read depth;
}
- export plot as image file;
- return execution to .NET;
- display image file in Windows form;
}
/*******************************************************
Name & Purpose: Paired Scatter Plot, fetch filter settings and call function to display paired scatter plot
Inputs: Experiment IDs
Processing: If displaying for the first time, use default filter settings to display the paired plot; otherwise, fetch filter settings from the user and display the paired plot
Outputs: None
Returns: None
Authors: D. Johann, E. Peterson
********************************************************/
paired_scatter_plot(exp_id)
{
display_paired_scatter_plot;
while (window is open)
{
filter_settings = read_filter_toolbar;
display_paired_scatter_plot;
}
}

/*******************************************************
Name & Purpose: Display Paired Scatter Plot, calculate WES mutations in common and exclusive to each experiment and generate the paired scatter plot
Inputs: Filter settings fetched from user input, and experiment IDs
Processing: Fetch WES mutations and, using filter settings and the referenced equations, calculate mutations in common and exclusive to each experiment; display the paired scatter plot (see Additional File 3)
Outputs: A paired scatter plot for each experiment in the exp_id array, based on variant allele frequency
Returns: None
Authors: D. Johann, E. Peterson
********************************************************/
display_paired_scatter_plot
{
// Relational algebra Eq. (9)
common = exp_id[0].W ∩ exp_id[1].W;
// Relational algebra Eq. (10)
exp0_unique = exp_id[0].W \ exp_id[1].W;
// Relational algebra Eq. (10)
exp1_unique = exp_id[1].W \ exp_id[0].W;
- label all genes in K;
if (filter_settings.show_kg_only == true)
- hide all non-labelled points;
- plot common, exp0_unique, and exp1_unique based on DNAllelicFreq using filter_settings;
}

/*******************************************************
Name & Purpose: Read Filter Toolbar, gather user-defined filter settings, and display the "TSG Methylation Table", "WES Table", or "RNA Table" if the user so desires
Inputs: None
Processing: Collect filter settings from the user; if the user clicks on "Filter", the filter settings are returned and used when displaying a desired plot; if the user clicks on "Show TSG Methylation Table", "Show WES Table", or "Show RNA Table", the desired table is displayed using the referenced equations
Outputs: Tables selected if clicked
Returns: User-defined filter settings
Authors: D. Johann, E. Peterson
********************************************************/
read_filter_toolbar
{
while (true)
{
filter_settings.min_vaf = user input minimum VAF;
filter_settings.max_vaf = user input maximum VAF;
filter_settings.min_depth = user input minimum read depth;
filter_settings.max_depth = user input maximum read depth;
filter_settings.min_meth = user input minimum methylation percent;
filter_settings.opacity = user input scatter plot point opacity;
filter_settings.show_kg_only = user input show key gene only;
if (button_click_filter)
return filter_settings;
else if (button_click_show_tsg_methylation_table)
- display TSG Methylation Table via relational algebra Eq. (13); break;
else if (button_click_show_wes_table)
- display WES Table via Eq. (11); break;
else if (button_click_show_rna_table)
- display RNA Table via Eq. (14); break;
}
return filter_settings;
}
/*******************************************************
Name & Purpose: KD Plus Scatter Plot, call subroutines to process and render data for each of the individual plot areas, as well as to calculate various metrics
Inputs: Experiment ID of experiment to visualize
Processing: Call subroutines to process and render the individual plot areas; call subroutine to calculate various metrics
Outputs: None
Returns: None
Authors: D. Johann, E. Peterson
********************************************************/
kd_plus_scatter_plot(exp_id)
{
display_kd_plot;
display_scatter_plot;
calculate_metrics;
while (window is open)
{
filter_settings = read_filter_toolbar;
display_kd_plot;
display_scatter_plot;
calculate_metrics;
}
}

/*******************************************************
Name & Purpose: Display KD Plot, displays the kernel density estimation curve, using the 'ks' R package to calculate an appropriate bandwidth
Inputs: User-defined filter settings and the experiment ID to be visualized
Processing: Calculate the KDE for DNA mutations; use R.NET to utilize the 'ks' package
Outputs: KD curve (see Figures 1 & 2)
Returns: None
Authors: D. Johann, E. Peterson
********************************************************/
display_kd_plot
{
- calculate the KDE based on relational algebra Eq. (11) and Eq. (2) for all mutations in Ŵ, weighted by copy number;
- utilize R.NET to call the 'hpi' function in the R 'ks' package for the bandwidth calculation;
- display the KD plot using filter_settings;
}

/*******************************************************
Name & Purpose: Display Scatter Plot, displays the DNA mutational scatter plot, and a tooltip containing RNA expression-based info if available
Inputs: User-defined filter settings and the experiment ID to be visualized
Processing: Display a scatter plot point for each DNA mutation, set each glyph depending on the degree of modality data available, and populate the tooltip with RNA expression data if available
Outputs: Mutational scatter plot (see Figures 1 & 2)
Returns: None
Authors: D. Johann, E. Peterson
********************************************************/
display_scatter_plot
{
- calculate Ŵ by relational algebra Eq. (11) using filter_settings;
- set the glyph for all DNA mutations to be a blue circle and place along the x-axis according to variant allele frequency and along the y-axis by depth;
for each tuple ŵ in Ŵ having RNA data
{
- update the point glyph;
- build a hover-over tooltip to contain: gene name, transcript id(s) and FPKM(s);
}
- display the scatter plot for each mutation in Ŵ with the associated tooltip (if RNA data is available);
}

/*******************************************************
Name & Purpose: Calculate Metrics, calculate various metrics for display
Inputs: User-defined filter settings and the experiment ID to be visualized
Processing: Calculate the SDI, total number of mutations, and total number of key gene mutations for the data being visualized
Outputs: Calculated metrics
Returns: None
Authors: D. Johann, E. Peterson
********************************************************/
calculate_metrics
{
- calculate the SDI using Eq. (1) and relation Ŵ, using filter_settings;
- calculate the total number of mutations in relation Ŵ using filter_settings;
- calculate the total number of key genes from K that are found in relation Ŵ, using filter_settings;
- display metrics;
}
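For readers who want to experiment with the core computations outside the .NET application, the following R sketch reproduces the set logic behind the paired scatter plot, the plug-in bandwidth selection used by the KD plot, and one common form of a Shannon diversity index. The toy data frames and column names (Chr, Pos, Ref, Alt, vaf) are hypothetical stand-ins for the W relations; this is illustrative only, not the iCloneViz implementation.

library(ks)

# Toy stand-ins for the WES relations W of two experiments.
w0 <- data.frame(Chr = c(1, 1, 2, 2, 3), Pos = c(100, 200, 300, 400, 500),
                 Ref = "A", Alt = "T", vaf = c(0.12, 0.40, 0.45, 0.22, 0.50))
w1 <- data.frame(Chr = c(1, 2, 3, 4), Pos = c(100, 300, 900, 120),
                 Ref = "A", Alt = "T", vaf = c(0.15, 0.44, 0.30, 0.10))

key <- function(w) paste(w$Chr, w$Pos, w$Ref, w$Alt)
common      <- w0[key(w0) %in% key(w1), ]    # Eq. (9): mutations shared by both experiments
exp0_unique <- w0[!(key(w0) %in% key(w1)), ] # Eq. (10): exclusive to experiment 0
exp1_unique <- w1[!(key(w1) %in% key(w0)), ] # Eq. (10): exclusive to experiment 1

# Plug-in bandwidth ('hpi') and kernel density estimate over variant allele
# frequencies, mirroring the display_kd_plot subroutine.
h   <- hpi(w0$vaf)
fit <- kde(w0$vaf, h = h)
plot(fit, xlab = "Variant allele frequency")

# A Shannon diversity index over clone proportions p (hypothetical values).
p   <- c(0.5, 0.3, 0.2)
sdi <- -sum(p * log(p))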
The authors declare that they have no competing interests.

DJJ and EAP conceived and designed the study. EAP, MAB, SSC, CJH, NW and DJJ performed experiments and analyses. DJJ and EAP designed the software. EAP, DJJ and MAB implemented the software. DJJ and EAP wrote the manuscript. All authors approved the manuscript.

Multi-omic relational integration. Illustration of the relations used to integrate the multi-omic datasets. The upper-right box (Relations & Attributes) defines all relations and their attributes. The left-most section (iCloneViz DB) lists each dataset and a name / identifier for each. The middle section (Multi-Omic Integration) illustrates the relationships and integration of each dataset using intermediate relations. Each intermediate integration is annotated with the attributes used in each combination. Each intermediate relation is further annotated with the equation used in the manuscript to form the given relation. The final multi-omic "Integrated Relation" is shown in the lower right. Click here for file

Genomic mutational overview. A genomic mutational overview of two experiments is computed and displayed. A corresponds to the Presentation sample and B to Relapse. These provide a general view of the inherent mutational events on a chromosomal basis. The x-axis contains an ordered list of chromosomes, each sized by the number of base pairs (bp) it contains. The y-axis is ordered by variant allele frequency (VAF), and the color scale indicates sequence depth. Each variant is a point in the plot. Click here for file

Scatter plot of paired samples. Displayed are variants in the Presentation sample on the x-axis compared to Relapse on the y-axis. Both the x- and y-axes are based on VAF. Variants are colored to indicate whether they are shared or unique. See the legend for color assignments. Click here for file

MYC oncogene with mutation showing possible splicing events. Illustrated is a lolliplot diagram of MYC showing the possible missense mutations in the coding region depending on splicing. Click here for file

iCloneViz pseudocode flow diagram. Illustrated is the execution flow of iCloneViz and its associated subroutines. Each subroutine and its formal parameters is represented as a node. Each arc represents a subroutine call from one subroutine to another. Click here for file"}
{"text": "In Thailand, porcine deltacoronavirus (PDCoV) was first identified in November 2015. The virus was isolated from piglets experiencing a diarrhea outbreak. Herein, the full-length genome sequence of the Thai PDCoV isolate P23_15_TT_1115 is reported. The results provide a clearer understanding of the molecular characteristics of PDCoV in Thailand. Porcine deltacoronavirus (PDCoV) is an enveloped, single-stranded, positive-sense RNA virus in the genus Deltacoronavirus, family Coronaviridae. A surveillance study was conducted to identify the presence of PDCoV in Thailand, focused on herds with diarrhea outbreaks and low mortality. Six intestinal samples were collected from piglets with diarrhea in five herds experiencing a diarrhea outbreak. Total RNA was extracted and screened for the presence of PDCoV RNA using primers specific to the membrane (M) and nucleocapsid (N) genes. PCR-positive samples were further investigated. The full-length genome was sequenced using 16 overlapping regions of each genome, cloned in the pGEM-T Easy vector (Promega), and sequenced in both directions in triplicate according to a previously reported protocol. The full-length genome of the Thai PDCoV isolate P23_15_TT_1115 was characterized. The full-length genome sequence of P23_15_TT_1115 is 25,402 nucleotides (nt) in length. The genome organization of the isolate resembles that of other PDCoV genomes, with the following gene order: 5′ untranslated region (UTR), open reading frame 1a/1b (ORF 1a/1b), spike (S), envelope (E), membrane (M), nonstructural protein 6 (Nsp6), nucleocapsid (N), nonstructural protein 7 (Nsp7), 3′ UTR.
The lengths of the ORF 1a/1b, S, E, M, and N genes are 18,786; 3,477; 249; 651; and 1,026 nt, respectively. A phylogenetic tree was constructed based on the full-length PDCoV genomes of 23 isolates available in GenBank, and the phylogenetic analysis demonstrates that the P23_15_TT_1115 isolate belongs to a group separated from the PDCoVs reported in both China and the United States. The full-length genome of P23_15_TT_1115 was compared to the 23 isolates available in GenBank. P23_15_TT_1115 was more highly homologous to PDCoV isolates from China, with nucleotide and amino acid similarities of 97.2 to 97.8% and 93.0 to 94.0%, respectively; it shares lower similarity with isolates from the United States. Moreover, the genetic analysis based on the S gene demonstrated that P23_15_TT_1115 is closely related to China PDCoV, with similarities of 95.6 to 96.7% and 95.9 to 98.1% at the nucleotide and amino acid levels, respectively. Twenty-four substitutions at the amino acid level were observed between P23_15_TT_1115 and the isolates from China. Moreover, P23_15_TT_1115 has a deletion of one amino acid (N51) at position 51, similar to isolates from China. The results in this study suggest that P23_15_TT_1115 is a novel isolate, closely related to PDCoV isolates from China. Studies investigating the molecular epidemiology, prevalence, and evolution of PDCoV in Thailand are urgently required. The complete genome sequence of P23_15_TT_1115 has been deposited in GenBank under the accession number KU984334."}
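The kind of pairwise nucleotide-identity comparison reported above can be reproduced in outline with Bioconductor's Biostrings package. This is a minimal sketch rather than the authors' pipeline: the FASTA file names are hypothetical placeholders, and the alignment settings are illustrative.

library(Biostrings)

# Hypothetical FASTA files holding the Thai isolate and a reference isolate.
thai <- readDNAStringSet("P23_15_TT_1115.fasta")[[1]]
ref  <- readDNAStringSet("reference_isolate.fasta")[[1]]

# Global pairwise alignment; pid() reports the percent sequence identity.
aln <- pairwiseAlignment(thai, ref, type = "global")
pid(aln)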
{"text": "In the Aspergillus flavus (A. flavus)-peanut pathosystem, development and metabolism of the fungus directly influence aflatoxin contamination. To comprehensively understand the molecular mechanism of the A. flavus interaction with peanut, RNA-seq was used for global transcriptome profiling of A. flavus during interaction with resistant and susceptible peanut genotypes. In total, 67.46 Gb of high-quality bases were generated for A. flavus-resistant (af_R) and -susceptible peanut (af_S) at one (T1), three (T2) and seven (T3) days post-inoculation. The reads mapping uniquely to the A. flavus reference genome in the libraries of af_R and af_S at T2 and T3 were subjected to further analysis, with more than 72% of all obtained genes expressed in the eight libraries. Comparison of expression levels for both af_R vs. af_S and T2 vs. T3 uncovered 1926 differentially expressed genes (DEGs). DEGs associated with mycelial growth, conidial development and aflatoxin biosynthesis were up-regulated in af_S compared with af_R, implying that A. flavus mycelia more easily penetrate and produce much more aflatoxin in susceptible than in resistant peanut. Our results serve as a foundation for understanding the molecular mechanisms of aflatoxin production differences between A. flavus-R and -S peanut, and offer new clues to manage aflatoxin contamination in crops.

Aspergillus flavus is a globally distributed filamentous, saprophytic fungus that frequently infects oil-rich seeds of various crop species during pre- and post-harvest, with subsequent production of mycotoxins such as cyclopiazonic acid, aflatrem, and the well-known aflatoxin.

DEGs in the comparisons of af_R vs. af_S and T3 vs. T2 were identified using the DESeq R package, and only those genes with a corrected p (q) value < 0.05 were considered to be differentially expressed. These comparisons offered insights into the metabolic and regulatory processes in A. flavus during its interaction with the peanut, and pairwise comparisons of log10(RPKM+1) expression values were made for the four A. flavus samples of af_R and af_S incubated for three and seven days.

Eleven DEGs were found between af_R and af_S at T2 (af_R_T2 vs. af_S_T2), while 1791 DEGs were found between af_R and af_S at T3 (af_R_T3 vs. af_S_T3). A total of 474 DEGs were obtained between af_R at T3 and at T2 (af_R_T3 vs. af_R_T2), while only 45 DEGs were obtained between af_S at T3 and at T2 (af_S_T3 vs. af_S_T2), suggesting a markedly higher number of gene expression changes in af_R than in af_S. Eighteen DEGs exhibited common differential expression patterns in the comparisons of af_R_T3 vs. af_R_T2 and af_S_T3 vs. af_S_T2, although the remaining expression changes in af_R_T3 vs. af_R_T2 were different from those in af_S_T3 vs. af_S_T2.

GO enrichment analysis was performed using the GOseq method, with annotation obtained in Blast2GO. The GO functional enrichment analysis of the 1113 (62.14%) DEGs with GO annotation in af_R_T3 vs. af_S_T3 revealed significantly enriched terms in the biological process and the molecular function categories. Of the 263 (55.48%) DEGs with GO annotation in af_R_T3 vs. af_R_T2, only two GO terms, catalytic activity (GO: 0003824) and oxidoreductase activity (GO: 0016491), in the molecular function category and three in the biological process category were significantly enriched.

To further investigate the biological functions and interactions of these genes, a Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis was conducted using the KEGG Orthology Based Annotation System (KOBAS). All DEGs in the comparisons af_R_T2 vs. af_S_T2, af_R_T3 vs. af_S_T3, af_R_T3 vs. af_R_T2 and af_S_T3 vs. af_S_T2 were analyzed to identify their associated KEGG metabolic pathways. Consistent with the results of the GO analysis, no KEGG pathways were significantly enriched in the DEGs obtained from af_R_T2 vs. af_S_T2 and af_S_T3 vs. af_S_T2. We found that 14 pathways were significantly enriched in DEGs between af_R_T3 and af_S_T3, including 12 pathways involved in the supply of nutrients and energy for fungal development, the biosynthesis of secondary metabolites (afv01110) and peroxisomes (afv04146).

The expression of mycelial growth-associated genes in A. flavus was significantly changed when the fungus interacted with R and S peanut. A gene encoding the alpha-N-arabinofuranosidase was identified in af_S_T3 vs. af_S_T2 and was up-regulated in this comparison. However, no mycelial growth-related DEGs were observed in either af_R_T2 vs. af_S_T2 or af_R_T3 vs. af_R_T2. Concurrently, the transcription of conidia-specific genes, such as the conidial hydrophobin RodA/RolA (AFLA_098380), conidiation-specific proteins and conidial development-related genes such as AtfA (AFLA_031340) and PksP (AFLA_006170), was significantly changed to various degrees in the different comparisons; several of these genes were up-regulated in af_R_T3 vs. af_R_T2. However, no DEGs involved in conidial development were found in either af_R_T2 vs. af_S_T2 or af_S_T3 vs. af_S_T2.

By analyzing the gene expression pattern data obtained from deep sequencing, especially the list of 1926 significantly differentially transcribed genes, we also examined DEGs falling within 36 secondary metabolism gene clusters across the comparisons of af_R_T2 vs. af_S_T2, af_R_T3 vs. af_S_T3, af_R_T3 vs. af_R_T2 and af_S_T3 vs. af_S_T2, respectively. Some of these DEGs were differentially expressed in two or three different comparisons.
Additionally, the 54#, 9# and 26# cluster with eight, seven and six DEGs, respectively, were the first three dominant ones in these 36 secondary metabolism gene clusters; while 14 of the 36 clusters only possessed one DEG. The aflatoxin biosynthetic pathway cluster (54#) was most worthy focused on because the carcinogenic, mutagenic aflatoxin has been characterized in A. flavus [vs. af_S_T2, but none were differentially expressed between these two pathosystems. In af_R_T3 vs. af_S_T3, 18 down- and 15 up-regulated genes in 54# cluster were obtained. Interestingly, three genes were significantly down-regulated. Surprisingly, we also identified other two significantly down-regulated genes (AFLA_112820 and AFLA_050450) involved in aflatoxin biosynthesis of the aflatoxin biosynthetic cluster (54#) was significantly enriched in oxidoreductase activity (GO: 0016491). The aflX/ordB gene (AFLA_139160) encoding a monooxygenase participates in aflatoxin biosynthesis [vs. af_R_T2. Interestingly, aflA/fas-2 (AFLA_139380), aflG/avnA (AFLA_139260), aflN/verA (AFLA_139280) and aflP/omtA (AFLA_139210) of the 54# cluster, which were differentially up-regulated in af_R_T3 vs. af_R_T2, were enriched in different biological process and molecular function (GO: 0003824 and GO: 0016491) categories. Similar to the results of the GO analysis, several KEGG pathways were enriched in DEGs between af_R_T3 and af_S_T3 and between af_R_T3 and af_R_T2, whereas no KEGG pathways were significantly enriched in comparisons of af_R_T2 vs. af_S_T2 or af_S_T3 vs. af_S_T2. The biosynthesis of secondary metabolites (afv01110) was significantly enriched in DEGs between af_R_T3 and af_S_T3; moreover, this pathway was enriched in DEGs AFLA_069370, AFLA_070820 and AFLA_116080 in secondary metabolite clusters 24#, 25# and 41#, respectively. The aflatoxin biosynthetic pathway is a complex secondary metabolic process that is regulated and influenced by over 30 genes in the A. flavus genome [vs. af_S_T3 and af_R_T3 vs. af_R_T2, the biosynthesis of secondary metabolites (afv01110) pathway was not found to be enriched in these DEGs. Taken together, these results implied that a greater number of repressed responses took place in af_R compared with af_S, while many more activated responses in af_R than in af_S as the interactive time increased in the A. flavus-peanut pathosystem.The ynthesis . By conts genome ,49. AlthA. flavus than the R, namely, mycelia of A. flavus can much more easily penetrate the S than the R peanut seed.Nutrients are indispensable elements required for the growth and metabolism of all living organisms, including plants and pathogens. For successful infection of the host plant and establishment of disease, fungal pathogens have evolved complex regulatory mechanisms to facilitate penetration, colonization and absorb nutrition for development and metabolisms, meanwhile to protect themselves against host defensive responses ,51,52,53Aspergillus [vs. af_S_T3, five down-regulated DEGs involving in aflatoxin biosynthesis were found. Among them, aflX/ordB (AFLA_139160), aflNa/hypD (AFLA_139270) and aflD/nor-1 (AFLA_139390) belong to the aflatoxin biosynthetic cluster. The oxidoreductase Nor-1 together with NorA and NorB reduce norsolorinic acid, the first stable intermediate in the aflatoxin biosynthesis, to averantin [aflA, aflC, aflG, aflP, aflN and aflCa) in the 54# cluster were up regulated in af_R_T3 compared with af_R_T2. 
The first step in aflatoxin biosynthesis is the reaction of acetyl-CoA and malonyl-CoA catalyzed by Fas-1/aflB and aflA/Fas-2 to form the starter unit hexanoate [aflG/avnA), monooxygenase (aflN/verA) and O-methyltransferase A (aflP/omtA) enzymatic reactions are respectively involved in the conversion of averantin to 5-hydroxyaverantin, versicolorin A to demethyl-sterigmatocystin and sterigmatocystin to O-methyl sterigmatocystin in aflatoxin biosynthetic pathway [Aflatoxins are biosynthesized through several enzymatic reactions in mycelia of ergillus ,49 and tergillus ,58,59,60verantin . Expressexanoate , followeexanoate . P450 mo pathway ,48. AlthA. flavus, germinate as mycelia to colonize the host plant. The survival ability of A. flavus conidia under severe environmental conditions is stronger than that of mycelia [A. flavus dominantly depends on the dispersal of conidia by air, water and soil movement, rain splash and biotic factors [A. flavus [A. flavus may form conidia in cavities or intercellular spaces of the cotyledon. The formation of conidia in A. flavus requires the concerted activity of numerous signaling proteins and transcription factors [atfA (AFLA_031340) was also down-regulated in af_R_T3 vs. af_S_T3. Interestingly, AtfA, as a bZIP transcription factor, possesses important functions in conidial development [vs. af_R_T2, 5 up-regulated DEGs related to conidial development were obtained, including 4 conidia-specific genes and one conidial yellow-pigment biosynthesis-related gene pksP/alb1 (AFLA_006170). The pksP/alb1 gene encodes a polyketide synthase (PksP) involved in the first step of conidial pigment biosynthesis. PksP catalyzes the reaction of acetyl coenzyme A (acetyl-CoA) andmalonyl-CoA to form the heptaketide naphthopyrone [A. flavus mycelial growth, conidial formation and aflatoxin production during infection and colonization the peanut need to be uncovered in order to paint a complete picture of the interactive mechanism of A. flavus with peanut. This comprehensive transcriptional profiling of A. flavus during interaction with the peanut should advance our fundamental understanding of the various associated genes and major metabolic pathways, thereby providing a direction for further study on the management of aflatoxin contamination in crops.Conidia, the asexual reproductive structure of mycelia . In addi factors . Previou. flavus , with th factors . Transcr factors were sigelopment and stabelopment . Considehopyrone ,68. Our A. flavus during the fungus interaction with the peanut. The research demonstrated that the global transcriptional analysis provided an exhaustive view of genes involved in development of mycelia and asexual spores, controlling of biosynthesis and activities of enzymes, conidial pigments and secondary metabolites processes, which were coordinately influenced in A. flavus by its host peanut (R and S genotypes). The transcriptome comparisons revealed that DEGs associated with mycelial growth and penetration, conidial formation and development, and aflatoxin biosynthesis and accumulation were up-regulated in af_S compared with af_R. This differential transcription may explain why aflatoxin accumulation was much higher in A. flavus-S peanut pathosystem than in A. flavus-R. However, further research is required to determine whether these DEGs are the genes responsible for the difference in aflatoxin accumulation between A. flavus-R and A. flavus-S pathosystems. 
Further functional exploration of these genes may provide useful information for their future application in the management of aflatoxin contamination in crops.In this study, an RNA-seq approach was employed for the first time to investigate molecular events involved in the development and metabolism of A. flavus was maintained in 20% glycerol at \u221280 \u00b0C at the Oil Crops Research Institute of the Chinese Academy of Agricultural Sciences (CAAS-OCRI). To prepare the A. flavus inoculation, the stored conidia of AF2202 were cultured on the potato dextrose agar medium for 7 d at 29 \u00b1 1 \u00b0C. The fresh conidia were then collected and suspended in sterile water containing 0.05% Tween-80. The concentration of conidia in the suspension was determined using a hemocytometer [The AF2202 strain of toxigenic A. flavus at post-harvest (6 CFU/mL) was then directly added to 10.0 g of peanut seeds in a sterile Petri plate. The inoculated samples were placed in an incubator and cultured at 29 \u00b1 1 \u00b0C in darkness. After incubation for 1, 3 and 7 d, the A. flavus-colonized seeds were taken out to test aflatoxin content (five replications) by high-performance liquid chromatography [The peanut cultivars Zhonghua 6 and Zhonghua 12 were cultivated and supplied by CAAS-OCRI . The mature seeds of both Zhonghua 6 and Zhonghua 12 are susceptible to seed invasion by -harvest , while ZA. flavus-R and -S peanut pathosystems, the aflatoxin content of these pathosystems was initially tested at the 2nd day after incubation. Aflatoxin content increased at maximum rate between the 3rd and the 4th day, and then remained stable after the 7th day in both peanut cultivars (our unpublished data). Beginning on the 2nd day, the aflatoxin content of the A. flavus-R pathosystem was far lower than that of the A. flavus-S; at its peak, the aflatoxin content of A. flavus-S was over 10-fold that of A. flavus-R. These differences in aflatoxin production between the two pathosystems suggested that the genetic expression of A. flavus was affected by its colonized host peanut. The 1st, 3rd and 7th day as the inflection time points .Although aflatoxin production trends differed between e points in the pA. flavus-peanut pathosystem was isolated using an RNeasy\u00ae Plant Mini kit , according to the manufacturer\u2019s protocol. All RNA samples were treated with RNase-free DNase I. A NanoDrop\u00ae 2000 spectrophotometer , a Qubit\u00aeFluorometer 2.0 and an Agilent 2100 bioanalyzer were used to test the concentration and integrity of RNA samples, and confirm that all RNA samples had an integrity value > 6.5. RNA quality detection, cDNA libraries construction and RNA sequencing were performed at the Novogene Bioinformatics Technology Co. Ltd. according to previously described methods [Total RNA from the methods .Raw data (raw reads) in fastq format were first processed using in-house perl scripts. Clean data (clean reads) were obtained by removing low-quality reads and those containing adapters, poly-N tails from the raw data. The Q20 and Q30 values, GC content, and sequence duplication levels were calculated for the clean data. All downstream analysis used the clean data with high quality. The sequencing data generated in this study have been deposited at the NCBI Short Read Archive database and are accessible through SRA series accession number SRP065525 (BioProject ID: PRJNA300619).A. flavus were downloaded directly from the Ensembl Genomes website [The genome and gene annotation files of website . 
]. After quality control, the paired-end clean reads were aligned to the A. flavus reference genome, and HTSeq was used to count the number of reads mapped to each gene.

Statistical analyses for discovering differentially expressed genes (DEGs) were performed with the DESeq R package. To evaluate the individual effects of the host peanut (Zhonghua 6 and Zhonghua 12) and time points (T1-T3), a multifactorial analysis was conducted using the multi-factor designs method of DESeq, which models both factors simultaneously.

GO enrichment analysis of DEGs was implemented using GOseq, which corrects for gene-length bias; GO terms with a corrected p (q) value < 0.05 were considered to be significantly enriched in DEGs. KOBAS software was used to test the statistical enrichment of DEGs in KEGG pathways, and a pathway with a corrected p (q) value < 0.05 was considered to be significantly enriched in DEGs."}
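The DEG and enrichment steps described above can be sketched in R. Everything below is a minimal, hypothetical illustration (toy data, assumed object names) of the original DESeq and goseq workflows, not the authors' scripts.

library(DESeq)   # the original DESeq package named in the methods, not DESeq2
library(goseq)

# Toy count matrix (genes x samples) and condition labels.
set.seed(1)
counts <- matrix(rpois(400, lambda = 50), nrow = 100,
                 dimnames = list(paste0("gene", 1:100), paste0("s", 1:4)))
cond <- factor(rep(c("af_R", "af_S"), each = 2))

# Negative-binomial test; genes with q (padj) < 0.05 are called DEGs.
cds <- newCountDataSet(counts, cond)
cds <- estimateSizeFactors(cds)
cds <- estimateDispersions(cds, fitType = "local")  # local fit for toy data
res <- nbinomTest(cds, "af_R", "af_S")
deg <- res[!is.na(res$padj) & res$padj < 0.05, ]

# GOseq enrichment with a gene-length bias correction; lengths and the
# gene-to-category map are hypothetical stand-ins here.
gene.lengths <- setNames(sample(500:3000, 100), rownames(counts))
go.map <- data.frame(gene = rownames(counts),
                     category = sample(paste0("GO:000", 1:9), 100, replace = TRUE))
universe <- as.integer(res$id %in% deg$id)
names(universe) <- res$id
pwf <- nullp(universe, bias.data = gene.lengths)
go  <- goseq(pwf, gene2cat = go.map)
sig.go <- go[p.adjust(go$over_represented_pvalue, method = "BH") < 0.05, ]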
{"text": "The effect of cannabis on emotional processing was investigated using event-related potential (ERP) paradigms. ERPs associated with emotional processing of cannabis users and non-using controls were recorded and compared during an implicit and explicit emotional expression recognition and empathy task. Comparisons of P3 component mean amplitudes were made between cannabis users and controls. Results showed a significant decrease in the P3 amplitude in cannabis users compared to controls. Specifically, cannabis users showed reduced P3 amplitudes for implicit compared to explicit processing over centro-parietal sites, an effect which reversed, and was enhanced, at fronto-central sites. Cannabis users also showed a decreased P3 to happy faces, with an increase to angry faces, compared to controls. These effects appear to increase with those participants that self-reported the highest levels of cannabis consumption. Those cannabis users with the greatest consumption rates showed the largest P3 deficits for explicit processing and negative emotions. These data suggest that there is a complex relationship between cannabis consumption and emotion processing that appears to be modulated by attention.

There are a variety of explanations of how the brain processes emotion, emphasizing the differing levels at which an explanation is focused. Some approaches emphasize a physiological structural account, while others are based on a higher, more "cognitive" level of understanding, with less emphasis on the underlying structures [2, 3].

The effects of cannabis on cognition and the brain are a rapidly evolving area of investigation which has provided evidence for a complex interaction between physiological and psychological processes. Cannabis consumption elicits immediate (acute), residual and long-term changes in brain activity that are manifested throughout the body, such as altered appetite and food intake, altered sleep patterns, and changes in measures of executive function and emotional behavior. Crean et al., for example, reported a relationship between the amount of Δ9-tetrahydrocannabinol (THC) consumed and participants' ability to identify emotional expressions in faces showing negative emotions such as fear and anger, but found little effect on faces showing sadness and happiness.

Behavioral studies have also linked heavy cannabis use to impairments in emotion processing compared to controls. In a dynamic emotional expression task where participants were asked to identify emotional expressions as faces morphed from open mouthed to an expression, either positive (happy) or negative (fearful), cannabis users' accuracy and reaction time performance was impaired. Individuals who used cannabis fifteen times a month and more than fifty times in their lifetime showed increased reaction times and decreased accuracy to the faces that became negative.

Recent research seeking to clarify the effects of cannabis use on brain structure is mixed. In particular, structural magnetic resonance imaging (MRI) data indicated no significant differences in the amygdala, a critical structure for emotion processing, in both adult and adolescent brains of daily users.

One approach that can be considered a valuable tool in further investigating the complexity of emotion processing is event-related potential (ERP) methodology. Structural changes in the brain do not always translate to differences in function and behavior. EEG techniques allow us to investigate the relationship between biomarkers (brain mechanisms) of a distributed neural network and associated behavior. This approach is particularly relevant for determining whether or not cannabis use has an effect on the circuitry of the brain when structural differences have not been found.

EEG recordings provide information with resolution in the milliseconds.
This approach consists of the measurement of the summated firing of neurons in the cortex through electrodes placed on the scalp. When this activity is averaged and time-locked to a specific event, an event-related potential (ERP) is obtained. ERPs provide an indication of the temporal dynamics of cortical activity following that specific event, allowing us to obtain information on the time course of emotion processing. This average is then compared to those obtained for other conditions to examine whether processing of different types of information diverges in time and across electrode sites. Through extensive research, some patterns in ERPs associated with cognitive processes have been characterized as ERP components: the amplitude of the waveform at a specific point in time. For example, the P3 or P3 complex component is defined as a change in voltage in a positive direction (an increase in amplitude) occurring on average between 200-400 ms after stimulus onset.

One ERP in particular is of interest to emotion processing and also cannabis use. The P3 complex component has been associated with task-relevant stimulus evaluation and attention allocation, and it has been consistently linked to emotion processing [19, 20].

The exact nature of the P3 and its relationship to emotion is still very much under investigation. Cannabis use has been associated with deficits in the P3, especially pertaining to complex cognitive tasks such as working memory and selective attention [31, 32].

The consistent P3 pattern exhibited during emotional processing represents an effective means of assessing the effects of cannabis use on neural processing. The P3 has implications for understanding both the short- and long-term effects cannabis use has on emotion processing. Despite this, the majority of research into the effects of cannabis on emotional expression recognition has focused on either behavioral approaches, for example Ballard et al., or structural ones. To better understand the role of attention in the effects of cannabis on emotion processing, our current study uses a paradigm based on Relleke et al. and extends it.

If cannabis use affects emotion processing, we would expect to see differences in the P3 to our emotional stimuli for our cannabis user group compared to controls. If attention is also affected by cannabis use, then greater demands on attention, driven by the levels of attentional engagement elicited by our three processing tasks (implicit, explicit and empathic), would also lead to P3 differences.

We therefore hypothesize that cannabis users will show a difference in P3 amplitude compared to non-cannabis-using controls. This will be further influenced by the type of emotion and task demands.
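As a concrete illustration of the averaging and comparison just described, the following R sketch derives an ERP and a P3-window mean amplitude from simulated single-trial epochs and compares hypothetical group means with a t-test and Cohen's d. All data and names here are simulated illustrations, not the study's analysis code.

set.seed(1)
# Simulated single-trial epochs: 100 trials x timepoints, stimulus onset at 0 ms.
t_ms   <- seq(-200, 798, by = 2)                 # 500 Hz sampling, in milliseconds
epochs <- matrix(rnorm(100 * length(t_ms)), nrow = 100)

# Baseline-correct each trial against the 200 ms pre-stimulus interval,
# then average across trials to obtain the ERP waveform.
baseline <- rowMeans(epochs[, t_ms < 0])
erp      <- colMeans(sweep(epochs, 1, baseline))

# Mean amplitude in a P3-type window (200-400 ms post-stimulus).
p3 <- mean(erp[t_ms >= 200 & t_ms <= 400])

# Between-group comparison of per-participant P3 amplitudes (simulated),
# with Cohen's d computed from the pooled standard deviation.
users    <- rnorm(35, mean = 2.0)
controls <- rnorm(35, mean = 3.0)
t.test(users, controls)
d <- (mean(users) - mean(controls)) /
     sqrt((34 * var(users) + 34 * var(controls)) / 68)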
Seventy-three undergraduate students and volunteers recruited from the community provided written consent and a self-report of demographic information. Undergraduate students recruited from the departmental research pool received credit in a Psychology course for their participation. Participants from the local community received no compensation. The study was approved by Colorado State University's Office of Research Integrity & Compliance Review Office Institutional Review Board (IRB), Protocol ID: 12-3716H.

All participants provided written consent and completed a general demographic questionnaire. They were further screened for symptoms of depression and anxiety using the Center for Epidemiological Studies Depression Scale (CES-D) and a corresponding anxiety scale.

After the screening and assessment portion of the study was completed, participants were fitted with a recording EEG cap, detailed below in the EEG acquisition portion of the methods. During the recording, participants completed an emotion processing task, presented on a Dell desktop computer at a viewing distance of 30 cm using Stim2 software.

The emotion processing task required that participants view faces depicting positive (happy), neutral, and negative (angry and fearful) emotional expressions, obtained from the Radboud Faces Database.

After EEG data collection, cannabis use was assessed by self-report using a questionnaire developed specifically for this study: the Recreational Cannabis Use Evaluation (R-CUE). R-CUE was created to better understand the ecology of cannabis use in Colorado's recreational system of high volume, potency, and variety. Thus, R-CUE consists of questions regarding type of use and method of intake (including use of edibles, concentrates, and transdermal applications), in addition to information about potential current use, relational previous use, and years of use. Based on participants' responses, they were assigned to control or cannabis user groups.

Electroencephalogram (EEG) was recorded from 25 Ag/AgCl electrodes covering regions of interest (ROIs) consistent with the measurement of the P3 ERP component and identified based on previous research.

Behavioral data were evaluated as reaction time (RT) in milliseconds, percent correct for the sex and emotion identification tasks, and average rated empathy for the empathic task. Participants with over 90% "no responses" on any of the three task conditions were excluded from analysis, as were individual trials with RTs faster than 100 ms. Repeated measures analyses of variance were performed by cannabis use grouping and emotion for RT, accuracy, and rated empathy scores during each task. Significant differences were followed up with t-tests at α = 0.05 for planned group comparisons, with Bonferroni correction for post-hoc tests where appropriate. Eta-squared measures of effect size were reported for all within-group factors, while Cohen's d was reported for between-group effects.

EEG was re-referenced offline to the common average and baseline-corrected to a pre-stimulus interval of 200 ms. Artifact rejection was applied using the built-in artifact rejection tool in the SCAN 4.5 EEG acquisition software, and trials containing artifacts were excluded from further analysis.

Overall differences in behavioral measures were consistent with differences in ERPs in relation to task and emotion. This suggests that ERP differences were driven by cannabis exposure and not by differences in responses to emotion or task. A detailed description of the results follows.

In the emotion processing task we examined the effect of cannabis use grouping and emotional expression on participants' reaction time (RT) in milliseconds and response scores in a repeated measures analysis of variance. An overall effect on RT was significant,
Participants\u2019 responses were the slowest for emotion identification of neutral expressions (angry: t(63) = -8.785, fearful: t(63) = -7.401; all ps < .001), with similar RTs for angry and fearful expressions, t(63) = -1.772, p = .081, and the fastest for happy expressions = -5.120, angry: t(63) = -11.532, fearful: t(63) = -12.379; all ps < .001). Empathic rating scores also differed by emotion, with slower RTs for neutral than angry expressions, t(63) = -2.814, p = .007, and slowest for happy compared to negative expressions (vs. angry: t(63) = -8.785, fearful: t(63) = -7.401, ps < .001), with a similar trend compared to neutral, t(63) = -2.447, p = .017. RTs for fearful expressions during the empathy task did not differ from neutral, t(63) = -1.751, p = .085, or angry expressions, t(63) = -1.735, p = .088.A significant effect of sex on RT for implicit processing was characterized by slower RT for sex identification of female compared to male faces, ps < .05): emotion identification accuracy scores were lowest for angry faces (vs. happy: t(63) = 6.838, p < .001; neutral: t(63) = 3.209, p = .002; fearful: t(63) = 5.933, p<001), lower for neutral than fearful faces (t(63) = -2.700, p = .009), and no other differences for happy faces = 1.149, p = .255; fearful: t(63) = -.849, p = .399). Further, a two-way interaction of emotion by sex indicated greater accuracy for male angry faces than female angry faces, t(63) = 2.965, p = .004, d = .27. The empathy task was characterized by lowest rated ability to empathize for neutral expressions (vs. happy: t(63) = -8.997, angry: t(63) = -5.882, fearful: t(63) = -6.640; all ps < .001), higher ratings for happy than angry expressions, t(63) = 3.733, p < .001, and no other differences for fearful expressions (vs. happy: t(63) = 2.193, p = .032; angry: 2.182, p = .033). A main effect of sex on empathy ratings suggested slightly higher rated ability to empathize with female than male faces, independent of emotion, F = 4.106, p = .047. There were no significant effects of emotion or sex on response scores for implicit trials.A main effect of emotion on response scores was significant for explicit and empathic tasks = 80.518, p = 001, \u03b7p2 = .542 = 3.498, p = .001, Cohen\u2019s d = -.74, fronto-central, t(68) = 3.937, p < .001, d = -.56, and parieto-occipital sites t(62) = 4.201, p < .001, d = -.93, but not at parietal, t(68) = 2.233, p = .029, d = .20, or centro-parietal sites, t(68) = .955, p = .343, d = .56 (see t(65) = -3.992, p < .001, d = .86) and fearful expressions (t(67) = -3.449, p = .001, d = .76), with a similar trend of smaller P3 for angry expressions (t(68) = 2.546, p = .013, d = .58) but no group differences for neutral expressions, t(68) = 2.347, p = .022, d = .63 (see t(62) = 3.289, p = .002, d = .70, and explicit, t(67) = 3.251, p = .002, d = .72, but not empathic processing, t(68) = 2.046, p = .045, d = .51; while right sites showed no group differences (implicit: t(68) = 1.622, p = .109, d = .40, explicit: t(68) = 2.283, p = .026, d = -.40, empathic: t(68) = 2.192, p = .032, d = .55).Cannabis users presented smaller P3 than controls over frontal, .56 see . An intet(69) = -3.151, p = .002. 
In contrast, P3 at fronto-central sites was enhanced for implicit compared to explicit, t(69) = -2.430, p = .018, and empathic processing, t(69) = -2.430, p = .018 (see t(69) = -3.880, p < .001, and fearful, t(69) = -3.142, p = .002, expressions, with no amplitude differences present for angry, t(69) = -.560, p = .577, or happy expressions, t(69) = .328, p = .744.A main effect of task instructions was characterized over centro-parietal sites as a reduction in P3 during implicit compared to explicit processing of facial expressions, .018 see . Fronto-ps < .01): neutral expressions elicited a larger P3 than fearful = -3.482, p = .001) and angry = -3.055, fronto-central: t(69) = -3.099, parietal: t(69) = -3.578, parieto-occipital: t(69) = 4.540; all ps < .003), but not happy expressions (happy-angry: t(69) = -.766, p = .446; happy-fearful: t(69) = -.954, p = .343; angry-fearful: t(69) = -.133, p = .895). At fronto-central sites, these effects were modulated by a task by emotion interaction, such that during implicit processing, P3 amplitude for happy and angry expressions was reduced compared to neutral expressions (happy: t(69) = -3.436, p = .001; angry: t(69) = -3.296; p = .002), with a trend for fearful compared to neutral (t(69) = -2.807, p = .006), and no differences between emotional expressions (happy-angry: t(69) = -.128, p = .899; happy-fearful: t(69) = -.694, p = .490; angry-fearful: t(69) = -.758, p = .451). Explicit processing presented a similar reduction only for angry expressions compared to neutral, t(69) = -2.730, p = .008, with smaller P3 for happy and fearful expressions compared to neutral approaching significance (happy: t(69) = -2.552, p = .013; fearful: t(69) = -2.387, p = .020), and no other significant differences (happy-angry: t(69) = .089, p = .899; happy-fearful: t(69) = .524, p = .490; angry-fearful: t(69) = 1.150, p = .451). Effects of emotion on P3 for the empathy task were not significant after correction for multiple comparisons, F = 3.352, p = .023. However, there was a trend suggesting smaller P3 for neutral compared to happy expressions, t(69) = 2.322, p = .023, and also a smaller P3 for happy compared to angry expressions, t(69) = -1.328, p = .008, while no other comparisons approached significance.P3 differences between expressions were observed across all but centro-parietal electrode sites (N = 20) and chronic users , and compared P3 amplitudes between the two groups and controls . In this manner, significant group effects for cannabis use were further examined using one-way analyses of variance with follow-up t-tests to explore the possible role of cannabis use frequency in this relationship.To expand on the patterns found in P3 amplitude effects of cannabis use, we examined differences within the cannabis users group based on frequency of use. For this purpose, we divided participants in this group into casual users : frontal = 4.645, p = .013, \u03b7p2 = .122; fronto-central = 5.808, p = .005, \u03b7p2 = .148; parieto-occipital = 4.505, p = .015, \u03b7p2 = .119) with no group effects at parietal, F = 2.868, p = .064, \u03b7p2 = .079, or centro-parietal sites, F = .777, p = .464, \u03b7p2 = .023. However, no comparisons between the three frequency use groups reached significance. 
Next, we compared mean P3 amplitude between groups at parieto-occipital sites for each emotional expression and found significant reductions in P3 amplitude for happy and fearful expressions (F = 6.121 & 5.416, ps = .004 & .007, ηp2 = .073), but not for neutral, F = 2.887, p = .063, or angry expressions, F = 3.699, p = .030. Specifically, there was a trend for reduced P3 amplitude in both casual and chronic users compared to controls for happy and fearful expressions, with no differences between casual and chronic users and no differences for neutral and angry expressions. For overall P3 amplitudes, group effects remained significant across the same electrode sites.

Behavioral responses were affected by explicit and empathic processing with respect to neutral and negative emotions, with angry and fearful faces giving rise to slower RT responses than happy faces in the explicit processing condition. This effect was reversed in the empathy condition, where happy faces gave rise to the slowest RT responses. A similar pattern was observed in accuracy ratings, with angry faces producing the lowest response scores in explicit and empathic processing. There were, however, no between-group differences, with cannabis users presenting the same pattern of responses as non-cannabis users.

Significant differences in ERPs were observed for group (cannabis users compared to controls), for task (implicit, explicit, and empathic), and for emotion (happy, neutral, angry, and fearful). Specifically, the P3 was significantly reduced in mean amplitude in our emotion processing paradigm for those participants who used cannabis compared to participants who did not, with happy expressions eliciting the largest P3, followed by fearful and angry expressions, in users compared to controls. Differences were further modulated by task: in the implicit and empathic task conditions, cannabis users showed a significant decrease in the P3 compared to controls when processing emotional expressions. However, when attention was directed in the explicit processing task, they showed P3 responses to emotional expressions similar to those of controls. When comparing levels of cannabis exposure as a possible factor in our cannabis use group, it appears that those who use cannabis casually have greater deficits in emotion processing, with a reduced P3 response generally across all emotion conditions. This suggests that increased exposure in our chronic group may possibly have led to compensatory mechanisms against the effects of cannabis on emotion processing. However, it is important to acknowledge that acute, residual, and long-term exposure, as well as the specific cannabinoids and other confounding compounds such as alcohol and tobacco, were not specifically controlled for a priori in this study.

The dissociation between performance on behavioral measures and the corresponding P3 amplitude data is interesting. One possible explanation is that exposure to cannabis alters the way in which the brain allocates resources during emotion processing as measured by the P3 component , 19, 20.

Our results are consistent with other studies investigating the effects of cannabis on the P3 as a marker for a number of cognitive processes, especially attention-based tasks. Cannabis exposure has been shown to give rise to a marked decrease in the P3 component in a number of attentional tasks, both visual and auditory , 32, 33.

Task-driven differences in the P3 were apparent in our data for both cannabis users and controls.
Specifically, the P3 was reduced for implicit processing compared to explicit and empathic processing in both groups. This is consistent with previous work by Rellecke et al., where implicit and explicit processing of emotional expressions gave rise to different patterns of ERPs .

Our implicit, explicit, and empathic tasks were sensitive to cannabis exposure, as marked by a decrease in the P3, which is consistent with the literature indicating that visual and auditory attentional processing is sensitive to cannabis exposure , 31. This suggests, consistent with previous research, that the effects of cannabis may be more closely tied to attention-based emotional processing rather than to emotion processing that occurs when attention is not directed towards a specific emotion , 31, 32.

This has implications for structural accounts of emotion processing .

The P3 complex also appears to be modulated by the type of emotion being processed, with negative emotions having a greater effect on the P3 than positive and neutral emotions in our cannabis user group, an effect which is task dependent. This is consistent with the literature showing effects of cannabis on negative emotion .

As previously mentioned, frequency of use appears to have an emotion-specific effect. Although not significant, the reduction in P3 amplitude to fearful, happy, and angry emotional expressions is greatest in casual users compared to chronic users and controls. This is consistent with Ballard and colleagues, who found the effects of cannabis use on emotion processing to be exacerbated for negative but not positive emotions .

Ability to empathize, measured as a behavioral response, appears consistent across groups for positive emotions. There was an expected reduction in ability to empathize with neutral expressions in all groups compared to positive and negative valence emotional faces. P3 effects did not reach statistical significance; however, there was a trend towards a greater P3 response for empathic responses to negative emotions compared to positive and neutral emotions in both groups. Further, while cannabis use was associated with differences in P3 during sex and emotion identification, empathic processing of the same facial expressions resulted in a reduction of these group differences. This supports previous discussion of the role of attention on emotion processing differences in the P3 for cannabis users , 31, 32.

One significant limitation of our study was not explicitly controlling for the acute, residual, or long-term effects of cannabis exposure, although the majority of our cannabis users' patterns of exposure best fit the residual and long-term effects definition. Similarly, the inability to control the amount and the type of cannabinoid our sample was exposed to makes our conclusions difficult to evaluate. This is both problematic and yet realistic, as recreational cannabis users are unlikely to be able to access this information. Testing of recreational and medical cannabis in Colorado dispensaries for consumers is minimal if not nonexistent at this time. For example, THC content can range from 6–12%, to as high as 90%, in concentrates and edibles. We also did not control a priori for exposure to other substances such as alcohol and tobacco use. Although our sample reported minimal exposure to other substances, the residual effects of these substances are well documented to affect cognition and could be contributing to our results.
One particularly interesting point regarding our sample was that they were all using cannabis legally according to Colorado state law. It has become anecdotally apparent that this population tends to restrict their substance exposure to cannabis exclusively. However, it should be acknowledged that this is a limitation of our study. As we develop a better understanding of the legal recreational cannabis industry, we expect to be able to better address some of these questions.

Our data show a significant effect of cannabis use on the P3 in an emotion processing task, which was further modulated by task instruction. The P3 amplitude was reduced for negative emotions in our cannabis user group compared to controls, and this effect was greatest when processing emotional expressions implicitly. There was a trend towards this being a dose-dependent relationship, with those users self-reporting the greatest exposure to cannabis having larger decrements in P3 amplitude. Attention-driven demands on emotion processing appear to be affected by cannabis use, as reflected in differences in P3 amplitude.

The Recreational Cannabis Use Evaluation (R-CUE)

Cannabis Use Questionnaire:

1. Age: _________

2. Are you part of Colorado's Medical Marijuana Registry (do you own a red card)?
   a. Yes
   b. No

3. If you answered yes to #2, how many years have you been a member of the registry?
   a. Less than one year (This is my first red card)
   b. 1–2 years
   c. 3–4 years
   d. 5–7 years
   e. 8–10 years
   f. 10+ years

4. How many years have you partaken in Cannabis use?
   a. Less than a year
   b. 1–2 years
   c. 3–5 years
   d. 6–10 years
   e. 11–15 years
   f. 16–20 years
   g. 20+ years

5. How many times a week do you use Cannabis (in any form)?
   a. Once a week: ______
   b. A couple of times a week: ______
   c. A few (3–6) times a week: ______
   d. Daily: ______
   e. 2–4 times a day: _____
   f. More than 4 times a day: ______

6. Which of the following ways do you like to intake cannabis, and which types of Cannabis do you prefer?
Check all that apply (and check subcategories to the best of your knowledge/ability):

a. Smoking Cannabis flower: ______
   i. Indicas ("Body high"): _____
   ii. Sativas ("Mind high"): _____
   iii. Hybrids: ______
      1. Sativa dominant hybrids: ______
      2. Indica dominant hybrids: ______
      3. True hybrids (50/50 of each): ______

b. Smoking Cannabis Concentrates (Hashish/"Dabs"): ________
   i. Type of cannabis in concentrate:
      1. Indica: _______
      2. Sativa: _______
      3. Hybrids: ______
         a. Sativa dominant hybrids: ______
         b. Indica dominant hybrids: ______
         c. True hybrids (50% of each): ______
      4. Strain specific: _______
         a. If yes to strain specific hash, list strains that you have used: ________________________________________
   ii. Method of THC extraction (the type of Concentrate). Check all that apply:
      1. Solvent based extraction:
         a. Butane Honey Oil (BHO): ____
         b. Carbon Dioxide (CO2): ______
         c. Quick Wash Isopropyl Alcohol (QWISO): _____
         d. Hexane solvent concentrates: _____
         e. Propane solvent concentrates: ____
         f. Ethanol solvent concentrates: _____
         g. "Shatter" hash (High purity butane/ethanol extraction): _____
      2. Solvent-less concentrates:
         a. Cold Water Extraction (CWE)/Icewax/Solvent-less wax/"grease"/"jewce": _____
         b. Bubble hash: _____
         c. Screen filtered hash (Finger hash/Keif): _____

c. Cannabis Edibles: _______
   i. Baked Edibles: _______
   ii. Hard Candy/Gummy Edibles: ______
   iii. Chocolate Edibles: ____
   iv. Drink based edibles: ______
   v. Tinctures: _____
      1. Glycerin based: _____
      2. Ethanol based: _____
   vi. Cannabis butter (Cannabutter): _____
   vii. Other (Please describe): ________________________________________

d. Dermal Cannabis Application: ______
   i. Cannabis skin patches: _____
   ii. Cannabis lotions/balms/oils: ______

7. If you selected any of the flower/concentrate methods of cannabis intake, what smoking devices do you use?
a. Water-filtration devices:
   i. Bong (upright/waterpipe): _____
   ii. Bong (gravity): _____
   iii. Bubbler: _____

b. Dry smoking devices:
   i. Pipe: ______
   ii. Steamroller: ______
   iii. Joint: ______
   iv. Blunt: ______

c. Vaporizers:
   i. Bag vaporizers: ______
   ii. Whip vaporizers: ______
   iii. Portable/Pen vaporizers: ________

d. Dabs:
   i. Spoon dabs: _____
   ii. Nail dabs: ______
   iii. Noodle dabs: ______
   iv. Health stone dabs: _____
   v. Skillet dabs: _______

8. In order of preference, what is your preferred form of consuming Cannabis: Cannabis flower/nugget, concentrates/hash, edibles, and topical absorption?
   1. ____________________
   2. ____________________
   3. ____________________
   4. ____________________

9. In order of preference, which is your preferred method(s) of smoking/ingesting Cannabis?
   1. _____________________
   2. _____________________
   3. _____________________
   4. _____________________
   5. _____________________
   6. _____________________
   7. _____________________
   8. _____________________
   9. _____________________
   10. _____________________

10. In an average month, how much in Cannabis flower/nugget do you smoke?
   a. None: _____
   b. A gram or less: _______
   c. An eighth of an ounce (3.5 grams) or less: _____
   d. A quarter of an ounce (7 grams) or less: _____
   e. A half of an ounce (14 grams) or less: _____
   f. An ounce (28 grams) or less: _____
   g. More than an ounce: _____
   h. More than two ounces: _______
   i. More than a quarter pound (4 ounces): ______

11. In an average month, how much in Cannabis concentrates do you smoke?
   a. None: ______
   b. A gram or less: _______
   c. An eighth of an ounce (3.5 grams) or less: _____
   d. A quarter of an ounce (7 grams) or less: _____
   e. A half of an ounce (14 grams) or less: _____
   f. An ounce (28 grams) or less: _____
   g. More than an ounce: _____

12. In an average month, how many Cannabis edibles do you consume?
   a. None: ____
   b. One edible: ____
   c. 2–4 edibles: ______
   d. 4–8 edibles: ______
   e. 10–20 edibles: ______
   f. 30+ edibles: ________

S1 Fig. (TIF)"} +{"text": "Listeria monocytogenes is responsible for the rare disease listeriosis, which is associated with the consumption of contaminated food products. We report here the complete genome sequences of vB_LmoS_188 and vB_LmoS_293, phages isolated from environmental sources and that have host specificity for L. monocytogenes strains of the 4b and 4e serotypes. Listeria monocytogenes is a Gram-positive facultative anaerobe and the causative agent of listeriosis, a disease associated with the consumption of contaminated food products. Its psychrotrophic nature, coupled with its ability to persist in the environment, makes it a particular concern for food safety. The two phages were isolated from environmental samples, vB_LmoS_188 from a wild mushroom sample and vB_LmoS_293 from a separate environmental source. The genomes were sequenced by MWG Eurofins on an Illumina MiSeq next-generation sequencing (NGS) system to >100× coverage. For each, sequencing yielded approximately 3 million reads, with an average length of 148 bp and an average quality score of 37. The removal of low-quality reads was undertaken using Trimmomatic, while annotation was aided by Artemis and functional predictions drew on InterPro (http://www.ebi.ac.uk/interpro). Bacteriophage vB_LmoS_293 is 40,759 bp in length. PCR analyses confirmed that both genomes contain linear circularly permuted double-stranded DNA (dsDNA) with terminal redundancy. Sixty ORFs were detected in vB_LmoS_188, while 72 ORFs were detected in vB_LmoS_293. The ORFs predominantly begin with the ATG start codon (91.6% in vB_LmoS_188 and 87.5% in vB_LmoS_293). No tRNAs were detected. No function was assigned to 34/60 ORFs detected in vB_LmoS_188 or for 41/72 ORFs in vB_LmoS_293.
The genomes are ordered in a modular fashion, consistent with previous observations for Listeria bacteriophages, and both phages show similarity to Listeria phage LP-030-3. The genome sequences of these two phages have been deposited in GenBank under the accession numbers KP399677 (vB_LmoS_188) and KP399678 (vB_LmoS_293)."} +{"text": "Independent component analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement, and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks. This manuscript puts forward two innovations. Firstly, we demonstrate a fast, likelihood motivated, and straightforward method for applying independent component analysis (ICA). Secondly, we propose a parcellation based adjustment for use when the source signals distribute differently across regions. Our work is rooted in the context of understanding human brain networks, and we use functional magnetic resonance imaging (fMRI) data for illustration in this manuscript.

We approach our study of fMRI by simultaneously analyzing all voxels, in contrast to regional or seed-based approaches. ICA is a factor-analytic approach that has been frequently utilized for the analysis of functional neuroimaging data because of its success in discovering important brain networks in many applications. The remainder of the paper is organized as follows. Section 2 describes the p-spline based ICA algorithm and considers relaxation of the identically distributed assumption, while Section 5 gives a discussion.

Let Xi be a T × V matrix for subject i = 1, …, I. In the context of fMRI, T indicates scans while V indicates voxels. Assume the number of ICs is Q. The ICA model specifies Xi = AiS, where Ai is a T × Q mixing matrix and S is a Q × V matrix of ICs. By assuming common spatial maps across subjects, we can stack the individual matrices in the temporal domain. Let X denote the resulting TI × V group data matrix and A the corresponding TI × Q group mixing matrix. Spatial group ICA then specifies the standard model

X = AS. (1)

We use parentheses to index matrices, so that X(t, v) is element (t, v) of X, X(t, ) is row t of X, and X( , v) is column v. Without loss of generality, assume E[X] = μX = 0 and hence E[S] = μS = 0. If this assumption were not made, the ICA model would imply X − μX = A(S − μS), which is exactly an ICA model with a centered data matrix and the ICs having mean 0. Hence, X is demeaned prior to analyses and μS is assumed to be zero. Similarly, since AS = {A/c} * {cS} for any nonzero scalar c, ICs are only identified up to scalar multiplication.
Thus, we assume that Var{S(q, v)} = 1 for q = 1, …, Q and v = 1, …, V. ICA gets its name from the assumption that S(q, ) ⫫ S(q′, ) when q ≠ q′, where ⫫ implies statistical independence. However, standard variations of ICA also assume that the S(q, v) form an i.i.d. collection across voxels, which we also adopt for now; this assumption will be relaxed in the next subsection. As a consequence of these assumptions, X( , v) ⫫ X( , v′) when v ≠ v′; yet note that X(t, ) is not (necessarily) independent of X(t′, ).

Typically Q < TI, so Equation (1) is overdetermined, and a two-stage dimension reduction is often performed to reduce the computational load and avoid overfitting. ICA estimates B = A−1 rather than A itself. If fq is the density of S(q, v) for v = 1, …, V, and f = (f1, …, fQ), then standard multivariate random variable transformation results imply that the joint density of X( , v) is

|det(B)| Π{q = 1, …, Q} fq{B(q, )X( , v)};

therefore the joint log-likelihood including all contributions for v = 1, …, V is

ℓ(B, f) = V log|det(B)| + Σ{v = 1, …, V} Σ{q = 1, …, Q} log fq{B(q, )X( , v)}.

It is generally not possible to solve the joint likelihood for the parameters fq and B simultaneously. Instead, an iterative optimization is often performed. Specifically, given the current estimate of B at iteration k, say B(k), one obtains the current source estimate Ŝ(k) = B(k)X, and density estimation techniques can be used to obtain f̂q(k).

Let c0(k) < c1(k) < … < cJ(k) be equidistant histogram cutpoints, where c0(k) = −ε + min Ŝ(k)(q, ) and cJ(k) = ε + max Ŝ(k)(q, ). The number ε is added to avoid numerical boundary effects. Let nqj(k), j = 1, …, J, be the count of values between cutpoints j − 1 and j for row q of Ŝ(k), and denote the midpoint of (c(j−1)(k), cj(k)] by mj(k) for j = 1, …, J. We obtain a density estimate via the log-linear model nqj(k) ~ Poisson{λqj(k)}, with log λq(k) = Dβq, where D is a B-spline basis matrix evaluated at the midpoints mj(k) and βq is a vector of coefficients. To avoid overfitting the B-spline model, and to avoid sensitivity to the degrees of freedom, we choose a large value for the degrees of freedom and put a squared penalty on the coefficients. Let μqj(k) denote the expectation of nqj(k); the penalized log likelihood then takes the form

ℓp(βq) = Σ{j = 1, …, J} {nqj(k) log μqj(k) − μqj(k)} − δ Σj {Δ(2)βqj}²,

where δ is a parameter controlling the smoothness of the fit, Δ denotes the difference operator, and Δ(2)βqj = βqj − 2βq,j−1 + βq,j−2. The resulting model is then a generalized linear mixed model on the counts. The B-spline basis is evaluated at the midpoint of each cutpoint interval; however, via interpolation, the smoother gives an estimate at all values, thus yielding a continuous function, say f̂q(k). Using generalized linear mixed models to penalize smoothing has become standard practice and is well described in Ruppert et al. Histogram smoothing of this variety is fast and yields an essentially nonparametric density estimate.

Furthermore, due to the convenient differentiation properties of B-spline bases and the simple exponential (Poisson) model, the first and second derivatives of the log-likelihood with respect to B are available in closed form, making gradient- and Hessian-based optimization algorithms easy to implement. This is useful for the stage of the algorithm that obtains the next iterate of B. Accordingly, we use a Newton-Raphson method to update the mixing matrix. Specifically, letting g(k) and H(k) denote the gradient and Hessian of the log-likelihood evaluated at B(k), we update B by

B(k+1) = B(k) − {H(k)}−1 g(k). (2)

The estimate of B should satisfy the condition that the underlying ICs are the same for all subjects; following Eloyan et al., the group mixing matrix is then estimated by {B(k+1)}−1. We use the Amari metric between B(k+1) and B(k) to monitor convergence.
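To make the two estimation stages concrete, the following sketch implements the histogram-based p-spline density estimate and a simplified version of the alternating scheme. It is a minimal illustration under stated assumptions rather than the authors' implementation: the bin count, basis dimension, and fixed smoothing parameter delta are illustrative choices (the paper selects the smoothness through the generalized linear mixed model formulation), X is assumed already centered and reduced to Q rows, the closed-form Newton-Raphson update of Equation (2) is replaced by a plain gradient ascent step, and the score functions are approximated by finite differences of the fitted densities.

import numpy as np
from scipy.interpolate import BSpline

def pspline_density(s, n_bins=100, n_basis=25, degree=3, delta=1.0, eps=1e-6, n_iter=50):
    # Histogram counts on equidistant cutpoints, padded by eps to avoid
    # numerical boundary effects (Section 2.2).
    lo, hi = s.min() - eps, s.max() + eps
    edges = np.linspace(lo, hi, n_bins + 1)
    counts, _ = np.histogram(s, bins=edges)
    mids = 0.5 * (edges[:-1] + edges[1:])          # basis evaluated at bin midpoints

    # Clamped B-spline basis with a generous number of degrees of freedom.
    inner = np.linspace(lo, hi, n_basis - degree + 1)
    knots = np.r_[[lo] * degree, inner, [hi] * degree]
    D = BSpline.design_matrix(mids, knots, degree).toarray()

    # Squared second-difference penalty on the coefficients, i.e. it penalizes
    # (beta_j - 2*beta_{j-1} + beta_{j-2})^2 with fixed weight delta (illustrative).
    Delta2 = np.diff(np.eye(D.shape[1]), n=2, axis=0)
    P = delta * Delta2.T @ Delta2

    beta = np.zeros(D.shape[1])
    for _ in range(n_iter):                        # Newton steps for the penalized
        eta = np.clip(D @ beta, -30.0, 30.0)       # Poisson log-likelihood
        mu = np.exp(eta)
        grad = D.T @ (counts - mu) - P @ beta
        hess = D.T @ (mu[:, None] * D) + P
        step = np.linalg.solve(hess, grad)
        beta += step
        if np.max(np.abs(step)) < 1e-8:
            break

    width = edges[1] - edges[0]                    # normalize to integrate to ~1
    norm = np.exp(np.clip(D @ beta, -30.0, 30.0)).sum() * width
    spline = BSpline(knots, beta, degree, extrapolate=False)

    def density(x):                                # continuous estimate via interpolation
        val = spline(np.clip(x, lo, hi - 1e-9))
        return np.exp(np.clip(val, -30.0, 30.0)) / norm
    return density

def amari_error(W1, W2):
    # Amari metric between two unmixing matrices; used here, as in the text,
    # to monitor convergence of successive iterates.
    Pm = np.abs(W1 @ np.linalg.inv(W2))
    row = (Pm / Pm.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    col = (Pm / Pm.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    q = Pm.shape[0]
    return (row.sum() + col.sum()) / (2.0 * q * (q - 1))

def pspline_ica(X, n_iter=100, lr=0.5, tol=1e-6, h=1e-4):
    # X: Q x V data matrix, assumed centered and already dimension reduced.
    Q, V = X.shape
    B = np.eye(Q)
    for _ in range(n_iter):
        S = B @ X
        psi = np.empty_like(S)
        for q in range(Q):                         # score (log f)' by central finite
            f = pspline_density(S[q])              # differences of the fitted density
            psi[q] = (np.log(f(S[q] + h)) - np.log(f(S[q] - h))) / (2.0 * h)
        grad = V * np.linalg.inv(B).T + psi @ X.T  # gradient of the log-likelihood
        B_new = B + (lr / V) * grad                # gradient step in place of Eq. (2)
        B_new /= (B_new @ X).std(axis=1, keepdims=True)  # enforce Var{S(q, v)} = 1
        if amari_error(B_new, B) < tol:
            return B_new, B_new @ X
        B = B_new
    return B, B @ X

The row renormalization reflects the Var{S(q, v)} = 1 identifiability constraint adopted above; for the regional variant described next, pspline_density would simply be fitted separately within each ROI and the resulting estimates summed.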
In fMRI applications, the assumption that the ICs are identically distributed across the entire brain can be restrictive, and we therefore propose to account for differences in activity across the brain by allowing different density functions in different regions. To this end, we adopt the functional parcellation of the brain activity map proposed by Yeo et al., which provides I = 18 ROIs for the whole brain. We assume the signals are i.i.d. within region but could be differently distributed across regions. Under this assumption, the density function fq can be written as the sum of the region-specific density functions, that is,

fq(s) = Σ{i = 1, …, I} fiq(s),

where Ri denotes the ith ROI and fiq is the density function on Ri. Thus, fiq takes positive values on the ith region and zero elsewhere. The density estimate of fiq can be obtained using the same procedure as proposed in Section 2.2, confined to the ith region, and the estimate for fq can be constructed by taking the sum of the region-specific estimates.

The proposed ICA algorithm can be summarized as follows:

_______________________________________________________________________________________________________

1. Choose an initial value B(0) for the mixing matrix B.

2. Let S = BX.

3. For each IC q, calculate the density function fiq(s) on the ith ROI, i = 1, 2, …, I, using the p-spline based density estimation algorithm.

4. Update the mixing matrix B using the Newton-Raphson method; see Equation (2).

5. Alternate steps 2–4 until convergence of B, assessed using the Amari metric.

_______________________________________________________________________________________________________

Note that, in the special case that f1q = f2q = … = fIq, the above algorithm reduces to the algorithm proposed in Section 2.2 assuming i.i.d. signals across the entire brain.

We conduct simulation studies to evaluate the performance of the proposed ICA algorithm. We consider four settings where data are generated using different distributions, and we compare the results of the proposed algorithm with fastICA, which, like other fixed-point methods, performs much faster than the likelihood-based algorithms.

In the first set of simulation studies, there are three source signals, with S(1, ) ~ Weibull, S(2, ) ~ Gamma, and S(3, ) ~ Gamma, respectively. Standard Gaussian noises are added to the generated ICs, and the observed data are generated with a fixed mixing matrix.

In the second setting, we assume the number of source signals Q = 2, and we generate the signals based on parcellation. Specifically, we partition the real line into 10 intervals, with cutoffs at the 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, and 90th percentiles of the normal distribution. For the first IC, the density function is uniform within each interval, but the overall shape is approximately normal. For the second IC, the density function follows a Laplace distribution within each interval, and the overall shape is approximately normal. The observed data are again generated with a fixed mixing matrix. The boxplots of the spatial correlations and the Amari errors based on 200 replications are summarized in Figure .

In the third setting, we generate multi-subject data with the number of subjects I = 3. The source signals are the same as those in the second setting, with a separate fixed mixing matrix for each of the three subjects. The simulation results are summarized in Figure .

In the fourth setting, we generate the ICs and mixing matrices by mimicking signals from real fMRI data. Specifically, we run fastICA on 10 subjects from the NITRC 1000 Connectome dataset to get twenty ICs (networks).
Three of the twenty networks are chosen as the true signals, and they are shown in Figure The results indicate that our proposed p-spline based ICA algorithm is successful in recovering signals from real fMRI data.https://www.nitrc.org/projects/fcon_1000/).We apply our proposed algorithm to the 1000 Functional Connectomes Project dataset, which consists of thousands of resting state scans combined across multiple sites with the goal of facilitating discovery and analysis of brain networks , and rerun the ICA algorithm on the dimension reduced dataset. We set R = 15, 20, 30 and Q = 15, 20, 30, respectively. Similarly as in Li et al. , auditory network (0.73), DMN (0.86) and control network (0.84). In addition, the correlations for the major brain networks using R = 20, Q = 30, and R = 20, Q = 20 are as follows: visual network (0.78), auditory network (0.61), DMN (0.88) and control network (0.69). In summary, we find that, although the estimation results depend on the number of components, the major networks appear to be robust against the choices of number of components.As suggested by an anonymous reviewer, we investigate the impact of the dimension of the reduced space on the final results. Specifically, we select different values of i et al. , we findIndependent component analysis is a factor-analytic approach that is commonly used in analyzing fMRI data. In this manuscript, we present a novel and simple ICA algorithm that is fast, likelihood based and straightforward to program. The algorithm is nonparametric, data-driven, and is blind to the particular distribution of the underlying signals. As a byproduct of the algorithm, we obtain the likelihood function of the ICA model which can be used for further statistical inference. It should be noted that, the likelihood function in our algorithm is a profile likelihood, since we are mainly interested in the mixing matrix estimates and the parameters over the spline basis are nuisance parameters. Indeed, one could also study the coefficients on the spline basis in a full likelihood, but this is not the goal of this manuscript, hence the variance of the estimator of the mixing matrix depends on the variance of the nuisance parameters.The proposed algorithm is extended to allow for region specific IC density functions, on the rationale that most signals of interest are reasonably confined to a subset of the entire anatomical brain space (Guo and Pagnoni, Simulation studies show that our proposed algorithm works well in both the simple and complex situations, and it substantially outperforms the existing ICA algorithms when the identically distributed assumption of the source signals is violated. In applying the proposed algorithm to the fMRI data, we choose to account for the difference in brain activities across regions by using the brain parcellation proposed by Yeo et al. . Our datThere are a few directions for future research. Firstly, the test-retest reliability of the intrinsic brain networks is an important issue and has been studied extensively in recent years. For example, Zuo et al. found thThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "A combinatorial synthetic approach is described for the isolation of quaternary cocrystals. The strategy outlines chemical and geometrical modulations in the long-range synthon Aufbau modules (LSAMs) to systematically increase the number of components. 
A synthetic strategy is outlined whereby a binary cocrystal may be developed in turn into a ternary and finally into a quaternary cocrystal. The strategy hinges on the concept of the long-range synthon Aufbau module (LSAM) which is a large supramolecular synthon containing more than one type of intermolecular interaction. Modulation of these interactions may be possible with the use of additional molecular components so that higher level cocrystals are produced. We report six quaternary cocrystals here. All are obtained as nearly exclusive crystallization products when four appropriate solid compounds are taken together in solution for crystallization. This array is made up of closed TMP\u00b7ORC tetramers consists of discrete synthon B modules that are laterally offset with respect to one another so that there are no C\u2014H\u22ef\u03c0 interactions. Fig. 2et al., 2011This article argues for the synthetic design of complex quaternary solids by systematic selection and fabrication from LSAMs in binary and ternary cocrystals. Fig. 1A , 1,10-phenanthroline (PHEN), 2,2-bisthiophene (22TP), hexamethylbenzene (HMB) and pyrene (PYR) results in stoichiometric ternary cocrystals. The structures of three of them are along predicted lines and they may be considered as being obtained by substitution of the \u2018free\u2019 TMP molecule in Form I of the ORC\u00b7TMP binary with the new aromatic compound Fig. 3. The othA Fig. 4 and the B is constructed exclusively with PHE and the TMP molecules provide cross links via the third \u2018hook\u2019 hydroxy group of the PGL molecule. From MeCN also but under different conditions, we obtained the 2:1:2 Form II which is reminiscent of the ORC\u00b7TMP\u00b7PHE ternary except that PHE is in the \u2018inner\u2019 part of synthon A rather than in the \u2018outer\u2019 part. There was no contamination of either of these ternaries by the \u2018other\u2019 ternary in the crystallization experiments. A priori, it would not be possible to predict which structure one would obtain from MeCN under what conditions. What is important, however, is that there are a number of topologically similar crystal structures available to the system. Which one is actually obtained would seem to depend on the exact experimental conditions used. The system lends itself to high throughput methods. We maintain that we carried out a very large number of crystallization experiments on an entire array of compounds and solvent systems in a combinatorial manner. It is of interest to note that such examples of (pseudo)polymorphism are very rare in three component systems . Let us consider these structures in turn. The 2:1:2:1 ORC\u00b7TMP\u00b7PHE\u00b7HMB structure follows smoothly from the ternary 2:1:2 ORC\u00b7TMP\u00b7PHE in a chemically reasonable manner. In the ternary, one observes \u03c0\u22ef\u03c0 stacking between electron-deficient PHE molecules . In the quaternary, HMB inserts in a classical donor\u2013acceptor fashion (\u223c\u20053.56\u2005\u00c5). Replacement of the electron-rich HMB by PYR and the ditopic PHE by ACR achieves the same result and one obtains the stoichiometric quaternaries 2:1:2:1 ORC\u00b7TMP\u00b7PHE\u00b7PYR and 2:1:2:1 ORC\u00b7TMP\u00b7ACR\u00b7PYR (see section S3). Coming next to PGL, it is not difficult to understand the crystal structure of the quaternary 2:1:1:1 PGL\u00b7TMP.PHE.DPE wherein an infinite synthon A based structure with two ditopic heterocycles is cross linked with DPE. 
The quaternaries 2:2:1:1 PGL.TMP.PHE.ANT and 2:2:1:1 PGL.TMP.PHE.PYR have very similar structures. Synthon A is constructed with TMP in the \u2018outer\u2019 locations and PHE in the \u2018inner\u2019 location. ANT and PYR intercalate with C\u2014H\u22ef\u03c0 interactions to give a columnar LSAM.In practice, a total of six quaternaries were obtained, three each from ORC and PGL may be found in a ternary cocrystal between say, A, B and C. Similarly, a synthon that is virtual in a three-component system may be seen in a four-component cocrystal. This also implies that proof correction mechanisms exist in the crystallizations, perhaps leading to the specificity of outcome.The results obtained in this work validate the idea of using a supramolecular combinatorial library in the isolation of stoichiometric three- and four-component molecular crystals. We have earlier used this concept to make a single ternary cocrystal global, BS_ORC_TMP, BS_ORC_TMP_Hydrate, BS_PGL_TMP.cif, QS_ORC_TMP_ACR_PYR, QS_ORC_TMP_PHE_HMB, QS_ORC_TMP_PHE_PYR, QS_PGL_TMP_PHE_ANT, QS_PGL_TMP_PHE_DPE, QS_PGL_TMP_PHE_PYR_hydrate, TS_ORC_TMP_22TP, TS_ORC_TMP_ACR, TS_ORC_TMP_HMB, TS_ORC_TMP_PHE, TS_ORC_TMP_PHEN, TS_ORC_TMP_PYR, TS_PGL_TMP_DPE, TS_PGL_TMP_PHE_Form_I, TS_PGL_TMP_PHE_Form_II, TS_PGL_TMP_PYR. DOI: Click here for additional data file.10.1107/S2052252515023957/hi5641sup3.zipCIF files. DOI: 10.1107/S2052252515023957/hi5641sup2.pdfSupporting information. DOI: 1428091, 1428092, 1428093, 1428106, 1428104, 1428105, 1428107, 1428090, 1428108, 1428099, 1428095, 1428098, 1428094, 1428096, 1428097, 1428103, 1428100, 1428101, 1428102CCDC references:"} +{"text": "Supporting Information File Rodrigues_etal_Supporting Information S1 with Track Changes.doc should not appear with the published article. Please view the correct Rodrigues_etal_Supporting Information S1 here.S1 FileSupporting Materials and Methods.(DOC)Click here for additional data file."} +{"text": "A new conceptual data model that addresses the geometric dimensioning and tolerancing concepts of datum systems, datums, datum features, datum targets, and the relationships among these concepts, is presented. Additionally, a portion of a related data model, Part 47 of STEP (ISO 10303-47), is reviewed and a comparison is made between it and the new conceptual data model. NOTE\u2014Though the scope of the DSCDM is limited to the concepts mentioned above, the aim is to provide a foundation upon which more comprehensive GD&T data models may be based.Datum is an entity name and datum refers to the object). Attribute names are printed in italic type . Additionally, permissible values from enumerated data types are printed in all uppercase letters .NOTE\u2014The following conventions are employed throughout the course of this paper. To distinguish between EXPRESS entities and the objects they represent, entity names are printed in bold type and the objects they represent are printed in non-bold type. Furthermore, entity names start with a leading uppercase letter requirements have been exchanged with technical drawings. However, with the advent of computer-aided design, manufacturing, and inspection equipment, the ability to exchange these requirements in a computer-sensible manner has become increasingly more desirable. 
As \u201ca data model is an effective technique to define the shareable semantics that are essential to the success of data communication in an integrated environment\u201d , a conceElectronic assembly, interconnect and packaging design [\u201cThe first step in data modeling is to define the data requirements\u201d . In regag design . With thMost of the following definitions are from existing drawing-based GD&T standards and associated reference books. These definitions are important, because they explain some of the concepts that are at the foundation of these GD&T standards, and consequently form the basis for the requirements of the DSCDM.Datum: \u201cA theoretically exact point, axis, or plane derived from the true geometric counterpart of a specified datum feature. A datum is the origin from which the location or geometric characteristics of features of a part are established\u201d .Datum Feature: \u201cAn actual feature of a part that is used to establish a datum\u201d .Datum Feature Symbol: \u201cThe symbolic means of indicating a datum feature consists of a capital letter enclosed in a square frame and a leader line extending from the frame to the concerned feature, terminating with a triangle\u201d .Datum System: \u201cA group of two or more separate datums used as a combined reference for a toleranced feature\u201d .Datum Reference Frame: A framework that consists of three mutually perpendicular datum planes, three datum axes (located at the intersection of each pair of datum planes), and a datum point (that is located at the intersection of the three datum planes).Datum Target: \u201cA specified point, line, or area on a part used to establish a datum\u201d .Datum Target Frame: \u201cThe datum targets are indicated by a circular frame divided in two compartments by a horizontal line. The lower compartment is reserved for a letter and a digit. The letter represents the datum feature and the digit the datum target number. The upper compartment is reserved for additional information, such as dimensions of the target area. If there is not sufficient space within the compartment, the information may be placed outside and connected to the appropriate compartment by a leader line\u201d .Feature: \u201cThe general term applied to a physical portion of a part, such as a surface, pin, tab, hole, or slot\u201d . [Feature Control Frame: \u201cThe feature control frame is a rectangular box containing the geometric characteristic symbol and the form, orientation, profile, runout, or location tolerance. 
If necessary, datum references and modifiers applicable to the feature or the datums are also contained in the box, e.g.\u201d .Feature of Size: \u201cOne cylindrical or spherical surface, or a set of two opposed elements or opposed parallel surfaces, associated with a size dimension\u201d .Least Material Condition (LMC): \u201cThe condition in which a feature of size contains the least amount of material within the stated limits of size\u2014for example, maximum hole diameter, minimum shaft diameter\u201d .least material requirement permits an increase in the stated geometrical tolerance when the concerned feature departs from its least material condition (LMC)\u201d [Least Material Requirement: \u201cThe n (LMC)\u201d .Maximum Material Condition (MMC): \u201cThe condition in which a feature of size contains the maximum amount of material within the stated limits of size\u2014for example, minimum hole diameter, maximum shaft diameter\u201d .maximum material principle is a tolerancing principle which requires that the virtual condition for the toleranced feature(s) and, if indicated, the maximum material condition of perfect form for datum feature(s), shall not be violated\u201d [Maximum Material Principle: \u201cThe iolated\u201d .Regardless of Feature Size (RFS): \u201cThe term used to indicate that a geometric tolerance or datum reference applies at any increment of size of the feature within its size tolerance\u201d .Shape_aspect, Shape_aspect_relationship, Property_definition, and Property_definition_relationship. A review of these entities is presented in STEP integrated generic resources are a series of STEP parts that define resource constructs that are context-independent. The underlying structure of the DSCDM is based on four entities from the STEP integrated generic resources. These entities are Shape_aspect entity of STEP Part 41 [Shape_aspect based entities are the entities based on the Shape_aspect_relationship entity of STEP Part 41. At the bottom of the page are the entities based on the Property_definition entity of STEP Part 41. Note that the DSCDM does not actually contain entities based on the Property_definition_relationship entity of STEP Part 45 [Property_definition based entities are related with attributes that have been included in the Property_definition based entities. For example, instead of specifying a Property_definition_relationship based entity in the DSCDM to relate the Datum_system_definition entity with the Datum_precedence_assignment entity, the relationship between these two entities is established by the assigned_datum_precedences attribute of the Datum_system_definition entity.Property_definition_relationship based entities exist in the DSCDM, they exist in spirit wherever two Property_definition based entities are related.NOTE\u2014While no The DSCDM is presented in the EXPRESS-G diagram shown in Part 41 are at t Part 45 . InsteadShape_aspect based entities are first, followed by the Shape_aspect_relationship based entities, and finally, the Property_definition based entities.The definitions of the entities presented in Datum_system corresponds to a datum system (see Sec. 3 of this paper) that is comprised of one to three datums.Datum_system entity is based on the Shape_aspect entity of STEP Part 41 [NOTE\u2014The Part 41 .NOTE\u2014The definition of datum system as defined in ISO 5459-1981 is given in Sec. 3 of this paper. 
However, for the purpose of this model, the definition of datum system has been extended so that a datum system may be comprised of a single datum.A datum_usages: The datum_usages attribute specifies a set of one to three Datum_usage_in_datum_systems. Each of the Datum_usage_in_datum_systems in this set corresponds to the usage of a datum in the datum system.defining_definition: The defining_definition attribute specifies the Datum_system_definition that specifies the characteristics of the corresponding datum system .NOTE\u2014On technical drawings, the characteristics of a datum system are typically specified in a feature control frame.EXAMPLE\u2014Both Datum_features specified as the used_datum_feature by the Datum_feature_usage_in_datums that are specified as the datum_feature_usages by the Datums that are specified as the used_datum by the Datum_usage_in_datum_systems that are specified as the datum_usages of the Datum_system, no Datum_feature may be specified more than once.NOTE\u2014WR1 corresponds to the assertion that each datum feature shall not be used more than once in establishing any one datum system.WR1: Of the Datum_targets specified as the used_datum_target by the Datum_target_usage_in_datum_target_sets specified as the datum_target_usages by the Datum_target_sets specified as the used_datum_feature by the Datum_feature_usage_in_datums that are specified as the datum_feature_usages by the Datums that are specified as the used_datum by the Datum_usage_in_datum_systems that are specified as the datum_usages of the Datum_system, no Datum_target may be specified more than once.NOTE\u2014WR2 corresponds to the assertion that each datum target shall not be used more than once in establishing any one datum system.WR2: Of the Datum corresponds to a datum (see Sec. 3 of this paper). A Datum may be either a Simple_datum or a Common_datum.Datum entity is based on the Shape_aspect entity of STEP Part 41 [NOTE\u2014The Part 41 .A datum_feature_usages: The datum_feature_usages attribute specifies a set of zero or more Datum_feature_usage_in_datums. Each of the Datum_feature_usage_in_datums in this set corresponds to the usage of a datum feature in establishing the datum.Simple_datum is a type of Datum that corresponds to a datum that is established from exactly one datum feature.A Simple_datum shall be specified as the used_datum by at least one Datum_usage_in_datum_system.NOTE\u2014WR1 corresponds to the assertion that each simple datum shall be used in at least one datum system.WR1: Each Simple_datum shall specify exactly one Datum_feature_usage_in_simple_datum as its datum_feature_usages.NOTE\u2014WR2 corresponds to the assertion that each simple datum shall be established from exactly one datum feature.WR2: Each Common_datum is a type of Datum that corresponds to a datum that is established from more than one datum feature.NOTE\u2014On technical drawings, a datum that is established from multiple datum features is indicated by placing the identifying letters of the datum features, separated by a dash, within a single compartment in a feature control frame. 
There is no significance to the order of the datum feature identifying letters within a compartment of the feature control frame.EXAMPLE\u2014The technical drawing presented in A Common_datum shall be specified as the used_datum by at least one Datum_usage_in_datum_system.NOTE\u2014WR1 corresponds to the assertion that each common datum shall be used in at least one datum system.WR1: Each Common_datum shall specify more than one Datum_feature_usage_in_common_datum as its datum_feature_usages.NOTE\u2014WR2 corresponds to the assertion that each common datum shall be established from more than one datum feature.WR2: Each Datum_feature corresponds to a datum feature (see Sec. 3 of this paper). A Datum_feature may be a Datum_target_set.Datum_feature entity is based on the Shape_aspect entity of STEP Part 41 [NOTE\u2014The Part 41 ..NOTE\u2014On technical drawings, a feature is typically identified as a datum feature by means of a datum feature symbol, e.g., NOTE\u2014The concept of datum feature in the DSCDM applies to features that are used to establish one or more datums. Features that may be used as datum features include \u201cpartial\u201d features and datum target sets, as well as \u201ccomplete\u201d and composite features. The concept of datum feature in the DSCDM does not pertain to features in which only a portion of the feature is used to establish one or more datums. \u201cPartial\u201d and composite features are discussed in Sec. 10.2 of this paper.A identification: The identification attribute specifies the string value by which the corresponding datum feature is referred.).NOTE\u2014On technical drawings, each datum feature is referred to by an identifying letter, .WR1: There shall be at most one Datum_target_set is a type of Datum_feature that corresponds to a set of one or more datum targets (see Sec. 3 of this paper).) indicate in which datum target sets the associated datum targets are used.EXAMPLE\u2014There are three datum target sets shown in the technical drawing presented in A datum_target_usages: The datum_target_usages attribute specifies a set of one or more Datum_target_usage_in_datum_target_sets. Each of the Datum_target_usage_in_datum_target_sets in this set corresponds to the usage of a datum target in the datum target set.Datum_target corresponds to a datum target (see Sec. 3 of this paper).Datum_target entity is based on the Shape_aspect entity of STEP Part 41 [NOTE\u2014The Part 41 .NOTE\u2014Datum targets are typically used in situations where it is inappropriate to specify an entire surface as a datum feature.EXAMPLE\u2014There are six datum targets shown in A datum_target_usages: The datum_target_usages attribute specifies a set of one or more Datum_target_usage_in_datum_target_sets. 
Each of the Datum_target_usage_in_datum_target_sets in this set corresponds to the usage of the datum target in a datum target set.Datum_usage_in_datum_system corresponds to the usage of a datum in a datum system.Datum_usage_in_datum_system entity is based on the Shape_aspect_relationship entity of STEP Part 41 [NOTE\u2014The Part 41 .A comprised_datum_system: The comprised_datum_system attribute specifies the Datum_system that corresponds to the datum system that is either partially or wholly comprised of the corresponding datum.used_datum: The used_datum attribute specifies the Datum that corresponds to the datum that is used in the corresponding datum system.precedence_assignment: The precedence_assignment attribute specifies the Datum_precedence_assignment that corresponds to the specification of the order in which the datum is established within the datum system.comprised_datum_system and used_datum shall be unique within a population of Datum_usage_in_datum_system.NOTE\u2014UR1 corresponds to the assertion that each datum shall not be used more than once in any one datum system.UR1: The combination of Datum specified as the used_datum shall either be a Common_datum or Simple_datum.NOTE\u2013WR1 corresponds to the assertion that each datum that is used in a datum system shall be established from one or more datum features.WR1: The Datum_feature_usage_in_datum_system corresponds to the usage of a datum feature in establishing a datum system.Datum_feature_usage_in_datum_system entity is based on the Shape_aspect_relationship entity of STEP Part 41 [NOTE\u2014The Part 41 .Datum_feature and a Datum_system is indirectly established with a Datum_feature_usage_in_datum, a Datum, and a Datum_usage_in_datum_system. Therefore, a Datum_feature_usage_in_datum_system should not be used unless it is necessary to indicate the application of either the least material requirement or the maximum material principle (see Sec. 3 of this paper) to a datum feature within the context of a datum system. In essence, a Datum_feature_usage_in_datum_system corresponds to a datum feature in the context of a datum system.NOTE\u2014The relationship between a A established_datum_system: The established_datum_system attribute specifies the Datum_system that corresponds to the datum system that is established from the corresponding datum feature.used_datum_feature: The used_datum_feature attribute specifies the Datum_feature that corresponds to the datum feature that is used to establish the corresponding datum system.applied_material_condition_property: The applied_material_condition_property attribute specifies the Datum_feature_material_condition_property that corresponds to the specification of a material condition property that is applied to the datum feature in the context of the datum system.Datum_feature specified as the used_datum_feature shall be specified as the used_datum_feature by a Datum_feature_usage_in_datum that specifies a Datum as the established_datum, and that Datum shall be specified as the used_datum by a Datum_usage_in_datum_system that specifies the same Datum_system as the comprised_datum_system, as is specified as the established_datum_system by the Datum_feature_usage_in_datum_system.NOTE\u2014WR1 corresponds to the assertion that the datum feature shall be used to establish a datum that is used in the datum system.WR1: The Datum_feature_usage_in_datum corresponds to the usage of a datum feature in establishing a datum. 
A Datum_feature_usage_in_datum is either a Datum_feature_usage_in_simple_datum or a Datum_feature_usage_in_common_datum.Datum_feature_usage_in_datum entity is based on the Shape_aspect_relationship entity of STEP Part 41 [NOTE\u2014The Part 41 .A established_datum: The established_datum attribute specifies the Datum that corresponds to the datum that is established from the corresponding datum feature.used_datum_feature: The used_datum_feature attribute specifies the Datum_feature that corresponds to the datum feature that is used to establish the corresponding datum.Datum_feature_usage_in_simple_datum is a type of Datum_feature_usage_in_datum that corresponds to the usage of a datum feature in establishing a datum that is established from exactly one datum feature.A established_datum: The established_datum attribute specifies the Simple_datum that corresponds to the datum that is established from the corresponding datum feature.Datum_feature specified by the inherited used_datum_feature attribute.NOTE\u2014\u201cThe corresponding datum feature\u201d refers to the datum feature that corresponds to the Datum_feature_usage_in_common_datum is a type of Datum_feature_usage_in_datum that corresponds to the usage of a datum feature in establishing a datum that is established from more than one datum feature.A established_datum: The established_datum attribute specifies the Common_datum that corresponds to the datum that is established, in part, from the corresponding datum feature.Datum_feature specified by the inherited used_datum_feature attribute.NOTE\u2014\u201cThe corresponding datum feature\u201d refers to the datum feature that corresponds to the Datum_target_usage_in_datum_target_set corresponds to the usage of a datum target in a set of datum targets.Datum_target_usage_in_datum_target_set entity is based on the Shape_aspect_relationship entity of STEP Part 41 [NOTE\u2014The Part 41 . symbol in NOTE\u2014On technical drawings, the usage of a datum target in a datum target set is indicated with a datum target frame, e.g., the datum target frame in EXAMPLE\u2014The A comprised_datum_target_set: The comprised_datum_target_set attribute specifies the Datum_target_set that corresponds to the datum target set that is either partially or wholly comprised of the corresponding datum target.datum_target_number: The datum_target_number attribute specifies the integer value by which the corresponding datum target is identified within the corresponding datum target set.NOTE\u2014Datum target numbers are described in 7.1.1 of ISO 5459 . datum target frame of EXAMPLE\u2014The datum target number \u201c1\u201d in the used_datum_target: The used_datum_target attribute specifies the Datum_target that corresponds to a datum target that is used in the corresponding datum target set.used_datum_target and defined_datum_target_set shall be unique within a population of Datum_target_usage_in_datum_target_set.NOTE\u2014UR1 corresponds to the assertion that each datum target shall not be used in any one datum target set more than once.UR1: The combination of datum_target_number and defined_datum_target_set shall be unique within a population of Datum_target_usage_in_datum_target_set.NOTE\u2014UR2 corresponds to the assertion that within a datum target set each datum target shall be identified by a unique datum target number.UR2: The combination of Datum_system_definition corresponds to the specification of the characteristics of a datum system. 
These characteristics include the order in which the datums are established within the datum system and any material condition properties that are explicitly applied to datum features within the context of the datum system. A Datum_system_definition shall either be a Datum_system_definition_with_material_conditions or a Datum_system_definition_without_material_conditions.Datum_system_definition entity is based on the Property_definition entity of STEP Part 41 [NOTE\u2014The Part 41 .NOTE\u2014On technical drawings, the characteristics of a datum system are typically specified in a feature control frame.A defined_datum_system: The defined_datum_system attribute specifies the Datum_system that corresponds to the datum system the characteristics of which are specified.assigned_datum_precedences: The assigned_datum_precedences attribute specifies a set of one to three Datum_precedence_assignments. Each of the Datum_precedence_assignments in this set corresponds to the specification of the order in which a datum is established within the datum system.Datum_precedence_assignment within the set of Datum_precedence_assignments specified as the assigned_datum_precedences shall specify as its assigned_to a Datum_usage_in_datum_system that specifies as its comprised_datum_system the same Datum_system as specified as the defined_datum_system.NOTE\u2014WR1 corresponds to the assertion that each datum system specification shall only specify the precedence of datums used in the datum system that the specification characterizes.WR1: Each Datum_precedence_assignment that has a name of TERTIARY shall not exist within the set of Datum_precedence_assignments specified as the assigned_datum_precedences unless a Datum_precedence_assignment exists within that set that has a name of SECONDARY.NOTE\u2014WR2 corresponds to the assertion that each datum system specification that specifies a tertiary datum shall also specify a secondary datum.WR2: A Datum_precedence_assignment that has a name of SECONDARY shall not exist within the set of Datum_precedence_assignments specified as the assigned_datum_precedences unless a Datum_precedence_assignment exists within that set that has a name of PRIMARY.NOTE\u2014WR3 corresponds to the assertion that each datum system specification that specifies a secondary datum shall also specify a primary datum.WR3: A Datum_system_definition shall be specified as the referenced_datum_system_definition by at least one Geometric_tolerance_with_specified_datum_systrem or Dimension_with_specified_datum_system.NOTE\u2014WR4 corresponds to the assertion that each datum system specification shall be referenced by at least one geometric tolerance or dimension.WR4: Each Datum_system_definition_with_material_conditions is a type of Datum_system_definition that corresponds to a specification of a datum system that specifies the application of material condition properties to one or more datum features within the context of the datum system.Datum_system_definition_with_material_conditions is specified in a feature control frame that contains either at least one least material requirement symbol that is preceded immediately by a datum feature letter or at least one maximum material principle symbol that is preceded immediately by a datum feature letter, e.g., .NOTE\u2014On technical drawings, a datum system specification that corresponds to a A applied_material_condition_properties: The applied_material_condition_properties attribute specifies a set of one or more 
A Datum_system_definition_with_material_conditions is a type of Datum_system_definition that corresponds to a specification of a datum system that specifies the application of material condition properties to one or more datum features within the context of the datum system.
NOTE\u2014On technical drawings, a datum system specification that corresponds to a Datum_system_definition_with_material_conditions is specified in a feature control frame that contains either at least one least material requirement symbol that is preceded immediately by a datum feature letter or at least one maximum material principle symbol that is preceded immediately by a datum feature letter; an example appears in a figure of the original document.
applied_material_condition_properties: The applied_material_condition_properties attribute specifies a set of one or more Datum_feature_material_condition_propertys. Each of the Datum_feature_material_condition_propertys in this set corresponds to the specification of a material condition property that is explicitly applied to a datum feature within the context of the datum system.
WR1: Each Datum_feature_material_condition_property within the set of Datum_feature_material_condition_propertys specified as the applied_material_condition_properties shall specify as its applied_to a Datum_feature_usage_in_datum_system that specifies as its established_datum_system the same Datum_system as specified as the defined_datum_system.
NOTE\u2014WR1 corresponds to the assertion that each datum system specification shall only specify material condition properties for datum features used to establish the datum system that the specification characterizes.
NOTE\u2014The defined_datum_system attribute referred to in WR1 is inherited from the Datum_system_definition entity of which this entity is a subtype.
A Datum_system_definition_without_material_conditions is a type of Datum_system_definition that corresponds to a specification of a datum system in which no material condition properties are specified.
NOTE\u2014On technical drawings, a datum system specification that corresponds to a Datum_system_definition_without_material_conditions is typically specified in a feature control frame that contains neither a least material requirement symbol that is immediately preceded by a datum feature letter nor a maximum material principle symbol that is immediately preceded by a datum feature letter; an example appears in a figure of the original document.
NOTE\u2014In technical drawings, a datum system specification that corresponds to a Datum_system_definition_without_material_conditions could also be specified in a dimension related note.
A Datum_precedence_assignment corresponds to the specification of the order in which a datum is established within a datum system.
NOTE\u2014The Datum_precedence_assignment entity is based on the Property_definition entity of STEP Part 41.
NOTE\u2014On technical drawings, the precedence of a datum within a datum system is typically specified in a feature control frame. The location of the compartment containing the letter(s) corresponding to the datum feature(s) from which the datum is established indicates the assigned precedence. The compartment for the primary datum (if it exists) is immediately to the right of the compartment containing the tolerance value. The compartment for the secondary datum (if it exists) is immediately to the right of the compartment for the primary datum. Lastly, the compartment for the tertiary datum (if it exists) is immediately to the right of the compartment for the secondary datum.
EXAMPLE\u2014A feature control frame illustrating these compartments appears in a figure of the original document.
assigned_to: The assigned_to attribute specifies a Datum_usage_in_datum_system. In essence, the Datum_usage_in_datum_system corresponds to the datum within the context of the datum system to which the datum precedence is assigned.
NOTE\u2014A datum within the context of one datum system may be assigned one precedence, e.g., primary, and the same datum within the context of another datum system may be assigned another precedence, e.g., secondary.
EXAMPLE\u2014An instance of this situation appears in a figure of the original document.
name: The name attribute specifies the value of the assigned datum precedence.
Valid values for the name are PRIMARY, SECONDARY, and TERTIARY.
associate_datum_system_definition: The associate_datum_system_definition attribute specifies the Datum_system_definition that corresponds to the datum system specification to which the datum precedence is associated.
UR1: The combination of name and associate_datum_system_definition shall be unique within a population of Datum_precedence_assignments.
NOTE\u2014UR1 corresponds to the assertion that no two datums of a datum system shall have the same precedence.
A Datum_feature_material_condition_property corresponds to the specification of a material condition property that is explicitly applied to a datum feature within the context of a datum system.
NOTE\u2014The Datum_feature_material_condition_property entity is based on the Property_definition entity of STEP Part 41.
applied_to: The applied_to attribute specifies a Datum_feature_usage_in_datum_system. In essence, the Datum_feature_usage_in_datum_system corresponds to the datum feature within the context of the datum system to which the material condition property is applied.
NOTE\u2014A datum feature within the context of one datum system may have one material condition property applied, e.g., least material requirement, and the same datum feature within the context of another datum system may have another material condition property applied, e.g., maximum material principle.
name: The name attribute specifies the value by which the material condition property is known. Valid values for the name are LEAST_MATERIAL_REQUIREMENT and MAXIMUM_MATERIAL_PRINCIPLE (see Sec. 3 of this paper).
NOTE\u2014A Datum_feature_material_condition_property that has a name of LEAST_MATERIAL_REQUIREMENT corresponds to a datum feature letter followed by the least material requirement symbol in a feature control frame of a technical drawing.
NOTE\u2014A Datum_feature_material_condition_property that has a name of MAXIMUM_MATERIAL_PRINCIPLE corresponds to a datum feature letter followed by the maximum material principle symbol in a feature control frame of a technical drawing.
NOTE\u2014It shall be understood that the regardless of feature size principle (see Sec. 3 of this paper) shall be in effect in cases where the datum feature is a feature of size (see Sec. 3 of this paper) and a Datum_feature_material_condition_property is not specified.
associate_datum_system_definition: The associate_datum_system_definition attribute specifies the Datum_system_definition_with_material_conditions that corresponds to the datum system specification to which the material condition property is associated.
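The two valid name values, together with the regardless-of-feature-size default described in the NOTE above, can be summarized in a small decision helper. This sketch is our own illustration; only the two enumeration values and the RFS default come from the text.

```python
# Our sketch of resolving the material condition in effect for a datum feature.
from enum import Enum

class MaterialCondition(Enum):
    LEAST_MATERIAL_REQUIREMENT = "L"    # circled-L symbol on drawings
    MAXIMUM_MATERIAL_PRINCIPLE = "M"    # circled-M symbol on drawings
    REGARDLESS_OF_FEATURE_SIZE = "RFS"  # default; never stated as an explicit property

def effective_condition(is_feature_of_size, explicit=None):
    """Return the material condition in effect for a datum feature.

    `explicit` is a MaterialCondition taken from a
    Datum_feature_material_condition_property, or None when no property is specified.
    """
    if explicit is not None:
        return explicit
    # Per the NOTE above: with no explicit property, RFS is in effect when the
    # datum feature is a feature of size; otherwise material conditions do not apply.
    return MaterialCondition.REGARDLESS_OF_FEATURE_SIZE if is_feature_of_size else None
```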
The Geometric_tolerance_with_specified_datum_system entity is not completely defined here, as it is not within the scope of this paper. However, the referenced_datum_system_definition attribute of this entity is defined to illustrate how the DSCDM could be tied into a larger GD&T data model.
NOTE\u2014The Geometric_tolerance_with_specified_datum_system entity is based on the Property_definition entity of STEP Part 41.
referenced_datum_system_definition: The referenced_datum_system_definition attribute specifies the Datum_system_definition that corresponds to the datum system specification that is referenced by the geometric tolerance.
The Dimension_with_specified_datum_system entity is not completely defined here, as it is not within the scope of this paper. However, the referenced_datum_system_definition attribute of this entity is defined to illustrate how the DSCDM could be tied into a larger GD&T data model.
NOTE\u2014The Dimension_with_specified_datum_system entity is based on the Property_definition entity of STEP Part 41.
NOTE\u2014While the data modeled with the Datum_system_definition entity is associated almost exclusively with geometric tolerances, clause 4.4 of ASME Y14.5M also provides for dimensions that reference datums; an example of the usage of the Dimension_with_specified_datum_system entity is shown in a figure of the original document.
EXAMPLE\u2014The three linear dimensions presented in that figure reference a datum system specification.
referenced_datum_system_definition: The referenced_datum_system_definition attribute specifies the Datum_system_definition_without_material_conditions that corresponds to the datum system specification that is referenced by the dimension.
A pseudo EXPRESS-G diagram of the datum system related portion of the Part 47 model, covering the Datum_target, Datum_feature, and Datum entities, is presented in a figure of the original document. The EXPRESS-G diagram does not show that the Datum_target, Datum_feature, and Datum entities are subtypes of the Shape_aspect entity of STEP Part 41.
NOTE\u2014The actual EXPRESS declarations of these entities have not been included in the definitions given in the tables of the original document.
NOTE\u2014The clause and figure numbers specified within those tables are those of STEP Part 47.
NOTE\u2014The definition of the Geometric_tolerance entity, of which the Geometric_tolerance_with_datum_reference entity is a subtype, is not shown here, as it is not within the scope of this paper.
The definitions of the entities shown in the tables exhibit deficiencies that are discussed below.
NOTE\u2014The STEP architecture is such that STEP application protocols may specialize entities from the STEP integrated generic resources. However, deficiencies in entities of the type mentioned above will only be passed on to the STEP application protocols that incorporate them.
EXAMPLE\u2014A STEP application protocol that incorporates the Datum_feature entity from STEP Part 47 will not be able to support multiple use datum features (see Sec. 6.3).
This section discusses the differences between the DSCDM and the datum system related portions of the model presented in STEP Part 47. One of the main differences between the Part 47 model and the DSCDM is that the Part 47 model has no entities that are equivalent to the Datum_system and Datum_system_definition entities of the DSCDM. Two independent comments submitted against the STEP Part 47 DIS document observed that a Datum_reference only made sense in the context of a datum system. Concurring with those comments, the Datum_system and Datum_system_definition entities were incorporated within the DSCDM.
In the DSCDM the identification attribute is on the Datum_feature entity; in contrast, in the Part 47 model the identification attribute is on the Datum entity. In practice, it is the datum feature to which an identifier is assigned (ASME Y14.5M). The placement of the identification attribute may seem moot, because if the identification attribute is placed on the Datum entity, the name of the associated datum feature could easily be derived. However, in cases in which a datum is established from more than one datum feature, the Part 47 model produces ambiguous results because it is impossible to determine the name of the datum features from the value of the identification attribute on a Datum. The DSCDM does not have this ambiguity, as the identification attribute is on the Datum_feature entity.
EXAMPLE\u2014For the feature control frame of the position tolerance in the source figure, the identification attribute of the Datum entity would have a value of \u201cA\u2013B\u201d.
However, it would be unclear as to which datum feature is identified as A and which datum feature is identified as B.
NOTE\u2014As the Datum and Datum_feature entities in the Part 47 model are subtypes of the Shape_aspect entity of STEP Part 41, they both inherit a name attribute. However, as a Datum_feature corresponds to an actual feature of a part, it is likely that the name attribute will not be available for the datum feature identifying letter because it will likely be used for another purpose. Furthermore, as datums are identified solely for GD&T purposes, it is likely that the inherited name attribute on the Datum entity would be available, thereby making the identification attribute on the Datum entity not only misplaced but redundant.
In the Part 47 model the feature_basis_relationship attribute and WR1 on the Datum_feature entity specify that a Datum_feature shall be related to exactly one Datum. On the other hand, in the DSCDM the inverse datum_feature_usages attribute on the Datum_feature entity constrains the number of Datums that shall be established from a Datum_feature to one or more.
EXAMPLE\u2014In the technical drawing presented in the source figure, datum features A and B are each used to establish more than one datum. In particular, both datum features A and B are used once again to establish the primary datum (yet another center axis) of the datum system specified by the position tolerance. As the Part 47 model limits the number of datums that may be established from a datum feature to one, this situation cannot be represented with the Part 47 model.
The Part 47 model fails to account for the fact that a datum feature may be used to establish multiple datums, whereas the DSCDM does account for this fact.
In the Part 47 model, the target_basis_relationship attribute and WR1 on the Datum_target entity specify that a Datum_target shall be related to exactly one Datum. On the other hand, the DSCDM does not specify a direct relationship between a Datum_target and a Datum. Instead, in the DSCDM the relationship between a Datum_target and a Datum is specified indirectly via the Datum_target_set entity and the two relationship entities Datum_target_usage_in_datum_target_set and Datum_feature_usage_in_datum. In the DSCDM the constraints on Datum_target and Datum_feature correspond to the assertion that a datum target shall be used in at least one datum target set and, because a datum target set is a type of datum feature, the datum target set shall be used to establish at least one datum.
EXAMPLE\u2014The datum target point in the technical drawing presented in the source figure is connected to two datum target frames and is associated with two datum target sets, F and G, each of which is used to establish a separate datum. As the Part 47 model limits the number of datums that may be established from a datum target to one, this situation cannot be represented with the Part 47 model.
The Part 47 model fails to account for the fact that a datum target may be used to establish multiple datums, whereas the DSCDM does account for this fact.
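The difference is purely one of cardinality: Part 47 binds each datum feature to exactly one datum, while the DSCDM allows one or more. The following compact sketch (ours; the specific datum names are illustrative only) shows why a drawing with multiple-use datum features defeats the Part 47 constraint but satisfies the DSCDM constraint.

```python
# Schematic contrast of the cardinality constraints discussed above.
def valid_part47(datums_per_feature: dict[str, set[str]]) -> bool:
    # Part 47: feature_basis_relationship + WR1 -> exactly one datum per datum feature.
    return all(len(d) == 1 for d in datums_per_feature.values())

def valid_dscdm(datums_per_feature: dict[str, set[str]]) -> bool:
    # DSCDM: inverse datum_feature_usages -> one or more datums per datum feature.
    return all(len(d) >= 1 for d in datums_per_feature.values())

# Multiple-use datum features, in the spirit of the drawing discussed above:
# A and B each help establish several datums, including a shared primary axis.
usage = {"A": {"DATUM_A", "COMMON_A-B", "PRIMARY_AXIS"},
         "B": {"DATUM_B", "COMMON_A-B", "PRIMARY_AXIS"}}
assert not valid_part47(usage) and valid_dscdm(usage)
```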
In the DSCDM the Datum_target_set is a type of Datum_feature. The Part 47 model has no entity that is equivalent to the Datum_target_set entity. Furthermore, the Part 47 model prevents the Datum_feature entity, or a specialization of it, from serving as a set of datum targets. That is, in the Part 47 model the attributes target_basis_relationship and feature_basis_relationship on the Datum_target and Datum_feature entities, respectively, in association with WR1 on each of these entities, prevent a Datum_target from being related to a Datum_feature via a Shape_aspect_relationship.
NOTE\u2014On technical drawings, datum target frames are used to group datum targets into datum target sets.
EXAMPLE\u2014In the technical drawing presented in the source figure, the datum targets connected to one group of datum target frames make up datum target set A. Additionally, the two datum target areas (hatched regions are used to indicate datum target areas) that are connected to a second group of datum target frames constitute datum target set B. Also, the datum target point that is connected to a further datum target frame makes up datum target set C.
In the Part 47 model, the definition for the target_id attribute on the Datum_target entity indicates that the use of this attribute is to associate a datum target number with a datum target. However, the placement of this attribute on the Datum_target entity only allows a single datum target number to be associated with a datum target, which is not surprising as the Part 47 model only allows a datum target to be associated with a single datum. On the other hand, in the DSCDM the placement of the datum_target_number attribute on the Datum_target_usage_in_datum_target_set entity permits a different datum target number to be assigned to each usage of a datum target in a datum target set.
NOTE\u2014On technical drawings, datum target frames are used to group datum targets into datum target sets. Additionally, datum target frames specify datum target numbers by which the datum targets are identified within the datum target sets.
EXAMPLE\u2014The datum target point in the technical drawing presented in the source figure is associated with two datum target sets, C and G. This datum target is identified by a datum target number of \u201c1\u201d when it is associated with datum target set C and is identified by a datum target number of \u201c2\u201d when it is associated with datum target set G.
The Part 47 model fails to account for the fact that multiple datum target numbers may be associated with a datum target, whereas the DSCDM does account for this fact.
A composite datum feature is a datum feature that is composed of other features. Neither the Part 47 model nor the DSCDM has an explicit entity that corresponds to a composite datum feature. However, it is of interest to examine how composite datum features may be represented using these two models. The model presented in STEP Part 47 has a Composite_shape_aspect entity, the intent of which is to group Shape_aspects for a purpose. At first glance this seems like a perfect match\u2014a Shape_aspect that is a Datum_feature as well as a Composite_shape_aspect could represent a composite datum feature. This usage of Composite_shape_aspect is even mentioned in a note in clause 4.4.1 of STEP Part 47. However, the feature_basis_relationship inverse attribute on the Datum_feature entity requires that a Datum_feature be specified as the relating_shape_aspect by exactly one Shape_aspect_relationship. Conversely, the component_relationships inverse attribute on the Composite_shape_aspect entity requires that a Composite_shape_aspect be specified as the relating_shape_aspect by two or more Shape_aspect_relationships (these Shape_aspect_relationships relate the Composite_shape_aspect with the Shape_aspects from which it is composed). This conflict between the two inverse attributes prohibits a Shape_aspect from being both a Datum_feature and a Composite_shape_aspect.
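The conflict is easy to verify mechanically: no instance count can satisfy an exactly-one constraint and a two-or-more constraint at once. A minimal sketch of this observation (ours, not the paper's):

```python
# Why a Shape_aspect cannot be both a Datum_feature and a Composite_shape_aspect:
def satisfies_datum_feature(n_relating: int) -> bool:
    # feature_basis_relationship inverse attribute: exactly one relationship.
    return n_relating == 1

def satisfies_composite(n_relating: int) -> bool:
    # component_relationships inverse attribute: two or more relationships.
    return n_relating >= 2

# No count of Shape_aspect_relationships satisfies both constraints at once.
assert not any(satisfies_datum_feature(n) and satisfies_composite(n)
               for n in range(100))
```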
This incompatibility was noted in a comment submitted against the STEP Part 47 DIS document. The second difference is that in the Part 47 model the Referenced_modified_datum entity is used to associate modifiers with datums, not datum features. On the other hand, in the DSCDM the Datum_feature_material_condition_property entity is used to associate modifiers with datum features.
NOTE\u2014The term datum reference letter is somewhat of a misnomer, as a datum reference letter actually refers to a datum feature.
EXAMPLE\u2014In the source figure, the maximum material principle symbol following the letter \u201cA\u201d in the feature control frame associates the maximum material principle with datum feature A. Likewise, the symbol following the letter \u201cB\u201d in the feature control frame associates the maximum material principle with datum feature B.
The Part 47 model is inconsistent with ASME Y14.5M in this respect.
NOTE\u2014While the DSCDM supports the application of different modifiers to the datum features of a common datum, as is permitted in ASME Y14.5M, the author of this paper has been unable to find examples of this situation in standards or reference books. Therefore, it is believed that occurrences of this case are probably extremely limited.
One may argue that if modifiers are directly associated with datums, as in the Part 47 model, they are indirectly associated with the datum features that establish those datums. However, this contrivance fails in cases in which the requirements are such that all the datum features from which a common datum is established are not to be associated with the same modifier.
A Dimension_with_specified_datum_system corresponds to a type of dimension that references a datum system specification. The Part 47 model has no entity that is equivalent to the Dimension_with_specified_datum_system entity.
NOTE\u2014The Dimension_with_specified_datum_system entity is not completely defined in this paper, as a discussion of dimensions is outside its scope.
NOTE\u2014Clause 4.4 of ASME Y14.5M provides for dimensions that reference datums.
EXAMPLE\u2014The technical drawing presented in the source figure shows dimensions that reference a datum system.
In the Part 47 model the Datum, Datum_feature, and Datum_target entities assert that each datum shall be established from one or more datum features or datum targets, that each datum feature shall be used to establish a single datum, and that each datum target shall be used to establish a single datum. In contrast, the DSCDM only requires that the relationship between the datum feature(s) from which a datum is established be specified for those datums that are used to establish a datum system. That is, for a datum not used to establish a datum system (some datums may just be the origin of one or more dimensions), the DSCDM does not require the corresponding Datum to be related to a Datum_feature via the Datum_feature_usage_in_datum entity.
The Part 47 model cannot be used to represent datums that are not directly established from datum features or datum targets, because of the attributes and rules on the Datum, Datum_feature, and Datum_target entities in the Part 47 model.
NOTE\u2014Although the datum planes that are labeled \u201cSecond datum plane\u201d and \u201cThird datum plane\u201d in the source figure cannot be represented with the Part 47 model (due to the Datum, Datum_feature, and Datum_target entities in the Part 47 model), as the DSCDM does not require a Datum to be related to either a Datum_feature or a Datum_system, this situation can be represented using the DSCDM.
This paper has presented a data model (the DSCDM) that covers the concepts of datum systems, datums, datum features, and datum targets. Furthermore, for comparison purposes, this paper has presented the datum related portions of the data model given in STEP Part 47."} +{"text": "Anopheles mosquitoes were first recognised as the transmitters of human malaria in the late 19th Century and have been subject to a huge amount of research ever since. Yet there is still much that is unknown regarding the ecology, behaviour (collectively \u2018bionomics\u2019) and sometimes even the identity of many of the world\u2019s most prominent disease vectors, much less the within-species variation in their bionomics. Whilst malaria elimination remains an ambitious goal, it is becoming increasingly clear that knowledge of vector behaviour is needed to effectively target control measures. A database of bionomics data for the dominant vector species of malaria worldwide has been compiled from published peer-reviewed literature. The data identification and collation processes are described, together with the geo-positioning and quality control methods. This is the only such dataset in existence and provides a valuable resource to researchers and policy makers in this field.
The behaviour and life history characteristics of a mosquito vector contribute to the relative importance of the species in terms of human malaria transmission4. Biting location, biting time and host preference will influence how effectively a mosquito can transmit malaria. In addition, understanding the behaviour of the vector guides how best it can be controlled and the likelihood that a particular intervention measure will be successful2. For example, a night feeding, anthropophilic, endophagic and endophilic mosquito is likely to be a highly effective transmitter of malaria. These same characteristics make this vector an ideal candidate for indoor insecticide-based control such as indoor residual spraying (IRS), which targets mosquitoes that preferentially rest indoors, or long-lasting insecticide-treated nets (LLINs), which target those species attracted to humans indoors at night. On the other hand, a species that is zoophilic, exophagic and exophilic would not be impacted by these control methods, but may be vulnerable to outdoor space spraying or insecticidal zooprophylaxis.
Increasingly malaria researchers are turning to transmission models to predict the impact of control measures on malaria transmission, to focus limited resources toward the most efficient measures of control and to address residual transmission. It is becoming more widely accepted that simply scaling up existing insecticide based intervention methods is insufficient to tackle increasingly resistant vector populations or to impact existing, control avoiding species5\u20137. Spatially explicit, species-specific behavioural data are needed to populate the emerging transmission models that aim to identify the pathways to achieve elimination4.
The dominant vector species (DVS) of Africa, the Americas and the Asia-Pacific region have previously been identified1, and a brief literature survey of vector bionomics was conducted to accompany a series of papers that mapped the ranges of these species9. The survey did not show the proportion of a species showing a particular trait, but instead the proportion of studies reporting the trait for each species. This highlighted two major points.
Firstly, a lack of published spatial datasets describing the ecology or behaviour of even the most dominant malaria vectors, and secondly, how much variation in behaviour exists within individual species. A comprehensive search for spatial bionomics data, incorporating behaviour, parasite infection and transmission potential plus other pertinent parameters, was therefore conducted.
The focus of this publication is to present the results of this work; a global, species-specific, temporally and spatially categorised database of the bionomics of the DVS of human malaria.
Bionomics data were abstracted from the published literature detailing research studies that included data on:
Vector biology; for example parity and longevity;
Vector infection and transmission; for example sporozoite rate and entomological inoculation rate;
Human biting rate;
Vector host preference (quantifiable measures of anthropo- and zoophily);
Human biting preference (quantifiable measures of endo- or exophagy);
Human biting activity (preferred time of biting);
Resting preference (quantifiable measures of endo- or exophily).
Regional datasets were created for Africa, the Americas and the Asia-Pacific region within which all data were attributed to species and location. There is no single standard method for measuring each of the above parameters so full details including mosquito collection date, season and sampling method were recorded, where given.
Publications detailing occurrence data for the DVS were identified from the MAP DVS database8 (date range of field data: 1985\u20132010). To ensure an up-to-date dataset, additional searches using the DVS specific names as search terms were conducted in PubMed10 and Web of Science11 covering literature published from 2010 to May 2013 for the African DVS and August 2014 for the American and Asia-Pacific DVS. Language restrictions were not placed on these searches. Full text digital copies of all publications were obtained. All articles written in English, French, Portuguese and Spanish were read, and those publications with no useful bionomics data were removed. The decision to only include data collected since 1985 was made to ensure that the dataset reflected the current distribution of the DVS and included specimens identified using more up-to-date identification methods and taxonomy1.
Each article was searched for relevant bionomics data related to both a given location and to one of the vector species in question. Data were extracted as reported in the source document, with no assumptions made, and only tabulated data or values reported in the text were accepted. When possible, bionomics data for individual sibling species were extracted. However, where there was some ambiguity in the species being reported, they were recorded as the species complex (e.g. reported as An. gambiae but only relying on morphological identification and with no clear indication whether the specimens were considered An. gambiae species or An. gambiae complex). In 2013 the molecular forms of An. gambiae were formally recognised as separate species: An. coluzzii and An. gambiae. Where given, we captured full species details, however these use the old molecular form classification. Therefore, consider all mentions of An. gambiae to include An. coluzzii and An. gambiae unless specifically stated otherwise. Our dataset also records the previous classification of chromosomal form, where given.
On occasion, despite conducting additional identifications to determine sibling species, authors presented their bionomics data for the species complex; this was also recorded as given.
Where possible all data reported in the source for a specific location, time and species are combined on a single data line. For example, this means there may be information relating to a vector population\u2019s host preference, sporozoite infection rate and peak biting time all combined on a single row. However, as not all bionomics parameters were reported by every study this also means that there are blank cells on each row. Blank cells always represent \u2018no data\u2019.
Where given, season was recorded. Due to the high influence of season on mosquito behaviour and abundance, when it was not provided it was calculated from the dates given, either in the source or by searching for information detailing when the rainy and dry seasons normally occurred in the specific location. When season has been calculated this is recorded in a separate column, so that users of the dataset are aware that this was not included in the original data source.
For the African dataset a search of the accumulated bionomics library was conducted to identify those authors who were most prolific in publishing pertinent bionomics data. These authors were contacted to ask if they had any further, unpublished data they would be able to contribute. Any unpublished data were added into the dataset as above. Authors were also contacted to clarify details that were unclear or to disaggregate data where the source suggested more detail may have been collected in the study, but had not been presented within the published source. Due to time constraints this step was not carried out for the American and Asia-Pacific datasets.
The majority of sites sampled had previously been geolocated in an earlier study mapping the ranges of the DVS9. All additional sites were georeferenced following the same protocol, fully detailed in Hay et al.1 In brief, site location was determined by searching for the site name in online gazetteers or other geolocational resources. Site related contextual information provided in the original reference was used to confirm that the correct site had been identified. Data locations were attributed to area types, including point locations (within 10\u2009km2), wide areas (10\u201325\u2009km2), small polygons (25\u2013100\u2009km2) or large polygons (>100\u2009km2). Single sampling points were identified as point locations. However, data were often reported for several sampling sites combined. In this case, sampling locations were determined as wide areas, small polygons or large polygons depending on the extent of the sampling area. A single set of coordinates for the most central sampling site of the study are used to define the location of the sampling area, with the area type used to give an indication of the geographic spread of the sampling locations.
We define a data record as a data point for a unique site-date-species combination. The three regional databases are publicly available online as comma delimited files (Data Citation 1). The data are also available via the Malaria Atlas Project (MAP) website13 and the VecNet digital library14. Each survey included in the vector bionomics database has been disaggregated to individual sites, individual dates (if the same site was sampled repeatedly) and individual Anopheles DVS. Values were extracted to a database with the following fields:
Source_ID.
Unique source identifier.Country. Country where the study was conducted.Site. Site name.Lat. Latitude in decimal degrees.Long. Longitude in decimal degrees.Area_type. Point (within 10\u2009km2), wide area (10\u201325\u2009km2), small polygon (25\u2013100\u2009km2) or large polygon (>100\u2009km2).Insecticide_control. Indicates whether insecticide based control methods are in place (previously implemented or implemented as part of the referenced study) at the specified location and time period.T: TRUE.F: FALSE.blank if unknown.Control_type. If \u2018TRUE\u2019 above, details the insecticide control method.ITN: insecticide treated nets.IRS: indoor residual spraying.IT curtains: insecticide treated curtains.Coil: coil.Combination: more than one control method used.?: not stated.Month_start. Survey start month.Month_end. Survey end month.Year_start. Survey start year.Year_end. Survey end year.Season_given. Rainy or dry season at the time of the survey, as indicated in the source.Season_calc. Rainy or dry season at the time of the survey, as derived from information on the general seasonal timings provided from the source or elsewhere.Species. The Anopheles species, species complex or subgroup. Also includes molecular form or chromosomal form if reported.ASSI. Additional species-specific information given in the source and provided as a free text field.Id_1. The method used to identify species.Chromosome banding: banding patterns on chromosomes.Cyto: cytological=cell/chromosomal characteristics.DNA: other DNA probing methods without PCR.M: morphological.Palpal ratio: palpal ratio.PCR: Polymerase Chain Reaction amplification techniques.Polytene chromosome: banding patterns on polytene chromosomes.PCR/DNA: PCR combined with DNA probe.Blank: unknown or unreported identification method.Id_2. The second method used to identify species, using same options as above.Biology_sampling_1. The sampling methods used to collect the specimens detailed in the VECTOR BIOLOGY section. Three methods can be listed. If more than three methods have been used, this is indicated as \u2018t\u2019 in the final column.MBI: Human biting indoorsMBO: Human biting outdoorsMB: Human biting (location not specified)ABI: Animal biting indoorsABO: Animal biting outdoorsAB: Animal biting (location not specified)HRI: House resting indoorsILT: Indoor light trapOLT: Outdoor light trapRO: Resting outdoors RO (pit): Resting outdoors in pitsRO (shelter): Resting outdoors in a shelterRO (ani-shelter): Resting outdoors in an animal shelterWinExit: Window exit trapsHBN: Human baited netABN: Animal baited netOdour-trap: Odour trapTent trap: Tent trapCol. Curtains: Colombian curtains?: Sampling method not specifiedBiology_sampling_2. As \u2018Biology_sampling_1\u2019.Biology_sampling_3. As \u2018Biology_sampling_1\u2019.Biology_sampling_n. \u2018t\u2019 indicates that there are more than three sampling methods.Parity_n. The number of parous females detected from the total number examined.Parity_total. The total number of females examined for parity.Parity_percent. The percentage of parous females in the sample: number of parous females/total number examined*100.Daily_survival_rate_percent. The estimated proportion of female mosquitoes alive on day d that are still alive on day d+1.Fecundity. The number of eggs laid per batch.Gonotrophic_cycle_days. The number of days for a female mosquito to go through the reproduce-feeding cycle.Infection_sampling_1. 
The sampling methods used to collect the specimens detailed in the VECTOR INFECTION RATE section. Three methods can be listed. If more than three methods have been used, this is indicated as \u2018t\u2019 in the final column. As \u2018Biology_sampling_1\u2019.Infection_sampling_2. As \u2018Infection_sampling_1\u2019.Infection_sampling_3. As \u2018Infection_sampling_1\u2019.Infection_sampling_n. \u2018t\u2019 indicates that there are more than three sampling methods.SR_dissection_n. The number of sporozoite infected females detected by dissection from the total number examined.SR_dissection _total. The total number of females dissected for sporozoites.SR_dissection_percent. The percentage of sporozoite infected females detected by dissection in the sample: number of infected females/total number examined*100.SR_CSP_n. The number of sporozoite infected females detected by circumsporozoite protein (CSP) analysis from the total number examined.SR_CSP_Pf_n. The number of P. falciparum specific sporozoite infected females detected by CSP analysis from the total number examined. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Pv_n. The number of P. vivax (variant not stated or combined) specific sporozoite infected females detected by CSP analysis from the total number examined. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Pv_210_n. The number of P. vivax variant 210 specific sporozoite infected females detected by CSP analysis from the total number examined. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Pv_247_n. The number of P. vivax variant 247 specific sporozoite infected females detected by CSP analysis from the total number examined. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Pm_n. The number of P. malariae specific sporozoite infected females detected by CSP analysis from the total number examined. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Po_n. The number of P. ovale specific sporozoite infected females detected by CSP analysis from the total number examined. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_total. The total number of females analysed for CSP.SR_CSP_percent. The percentage of sporozoite infected females detected by CSP analysis in the sample: number of infected females/total number analysed*100.SR_CSP_Pf_percent. The percentage of P. falciparum specific sporozoite infected females detected by CSP analysis in the sample: number of P. falciparum specific infected females/total number analysed*100. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Pv_percent. The percentage of P. vivax (variant not stated or combined) specific sporozoite infected females detected by CSP analysis in the sample: number of P. vivax specific infected females/total number analysed*100. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Pv_210_percent.The percentage of P. vivax variant 210 specific sporozoite infected females detected by CSP analysis in the sample: number of P. vivax variant 210 specific infected females/total number analysed*100. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Pv_247_percent.The percentage of P. vivax variant 247 specific sporozoite infected females detected by CSP analysis in the sample: number of P. vivax variant 247 specific infected females/total number analysed*100. 
This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Pm_percent. The percentage of P. malariae specific sporozoite infected females detected by CSP analysis in the sample: number of P. malariae specific infected females/total number analysed*100. This field is only included for the Americas and the Asia-Pacific region.SR_CSP_Po_percent. The percentage of P. ovale specific sporozoite infected females detected by CSP analysis in the sample: number of P. ovale specific infected females/total number analysed*100. This field is only included for the Americas and the Asia-Pacific region.Oocyst_n. The number of oocyst infected females detected from the total number examined.Oocyst_total. The total number of females examined for oocysts.Oocyst_percent. The percentage of oocyst infected females detected in the sample: number of infected females/total number examined*100.EIR. The entomological inoculation rate. This is the number of infective bites per person per unit time.EIR_period. The unit of time relating to the EIR.Ext_incubation_period_days. The extrinsic incubation period of the malaria parasite in days.Indoor_HBR_sampling. The sampling method used to collect the mosquitoes from which indoor human biting rate is evaluated. As \u2018Biology_sampling_1\u2019.Indoor HBR. The indoor human biting rate; the number of bites per person per unit time.Outdoor_HBR_sampling. The sampling method used to collect the mosquitoes from which outdoor human biting rate is evaluated. As \u2018Biology_sampling_1\u2019.Outdoor HBR. The outdoor human biting rate; the number of bites per person per unit time.Combined_HBR_sampling_1. The sampling methods used to collect the mosquitoes from which human biting rate is evaluated where data are amalgamated from more than one method . Three methods can be listed. If more than three methods have been used, this is indicated as \u2018t\u2019 in the final column. As \u2018Biology_sampling_1\u2019.Combined_HBR_sampling_2. As \u2018Combined_HBR_sampling_1\u2019.Combined_HBR_sampling_3. As \u2018Combined_HBR_sampling_1\u2019.Combined_HBR_sampling_n. \u2018t\u2019 indicates that there are more than three sampling methods.Combined_HBR. The human biting rate evaluated from the data from amalgamated sampling methods.HBR_unit. The unit time for the HBR data.Indoor_host_sampling. The indoor sampling method used to collect the mosquitoes from which indoor host preference is evaluated. As \u2018Biology_sampling_1\u2019.Indoor_host_n. The number of mosquitoes positively indicating a measure of host preference from the total number collected indoors.Indoor_host_total. The total number of mosquitoes sampled indoors examined for measures of host preference.Indoor host. The measure of host preference from indoor sampled mosquitoes.Outdoor_host_sampling. The outdoor sampling method used to collect the mosquitoes from which outdoor host preference is evaluated. As \u2018Biology_sampling_1\u2019.Outdoor_host_n. The number of mosquitoes positively indicating a measure of host preference from the total number collected outdoors.Outdoor_host_total. The total number of mosquitoes sampled outdoors examined for measures of host preference.Outdoor host. The measure of host preference from outdoor sampled mosquitoes.Combined_host_sampling_1. The sampling methods used to collect the mosquitoes from which host preference is evaluated where data are amalgamated from more than one method, or where the method used is unclear. Three methods can be listed. 
If more than three methods have been used, this is indicated as \u2018t\u2019 in the final column. As \u2018Biology_sampling_1\u2019.Combined_host_sampling_2. As \u2018Combined_host_sampling_1\u2019.Combined_host_sampling_3. As \u2018Combined_host_sampling_1\u2019.Combined_host_sampling_n. \u2018t\u2019 indicates that there are more than three sampling methods.Combined_host_n. The number of mosquitoes positively indicating a measure of host preference collected by a combination of sampling methods.Combined_host_total. The total number of mosquitoes sampled by a combination of sampling methods, examined for measures of host preference.Combined_host. The measure of host preference from mosquitoes sampled by a combination of methods.Host_unit. Indicates the measure used to identify host preference.HBI (%): Human Blood Index as a percentage.ABI (%): Animal Blood Index as a percentage.HBI : Human Blood Index as a percentage calculated from data given in source.ABI : Animal Blood Index as a percentage calculated from data given in source.AI: \u2018Anthropophilic Index\u2019, a measure of attraction to humans not included above, for example % individuals attracted to human baited trap over total collected in both human and cattle baited trap, calculated from count data.NB. the unit \u2018HBI \u2019 and \u2018ABI \u2019 is where the source provides the raw data needed to calculated HBI or ABI but does not actually present these data. The unit indicates that the calculation has been done here.Other_host_sampling_1. The sampling methods used to collect the mosquitoes from which host preference is evaluated where additional data are presented examining host preference. Three methods can be listed. If more than three methods have been used, this is indicated as \u2018t\u2019 in the final column. As \u2018Biology_sampling_1\u2019.Other_host_sampling_2. As \u2018Other_host_sampling_1\u2019.Other_host_sampling_3. As \u2018Other_host_sampling_1\u2019.Other_host_sampling_n. \u2018t\u2019 indicates that there are more than three sampling methods.Other_host_n. The number of mosquitoes positively indicating a measure of host preference.Other_host_total. The total number of mosquitoes examined for measures of host preference.Other_host. The measure of host preferenceOther_host_unit. As \u2018Host_unit\u2019.Indoor_number_sampling_nights_biting. The sampling effort, in number of \u2018man nights\u2019, to collect the indoor biting data.Indoor_biting_sampling. The sampling method used to collect the indoor mosquitoes from which biting location preference is determined. As \u2018Biology_sampling_1\u2019.Indoor_biting_n. The number of mosquitoes found biting indoors.Indoor_biting_total. The total number of indoor and outdoor biting mosquitoes.Indoor_biting. The percentage or ratio of mosquitoes found biting indoors.Outdoor_number_sampling_nights_biting. The sampling effort, in number of \u2018man nights\u2019, to collect the outdoor biting data.Outdoor_biting_sampling. The sampling method used to collect the outdoor mosquitoes from which biting location preference is determined. As \u2018Biology_sampling_1\u2019.Outdoor_biting_n. The number of mosquitoes found biting outdoors.Outdoor_biting_total. The total number of indoor and outdoor biting mosquitoes.Outdoor_biting. The percentage or ratio of mosquitoes found biting outdoors.Indoor_outdoor_biting_units. 
Indicates the data unit for the indoor and outdoor biting data.I:O: Indoor to outdoor ratio.%: % biting indoors (or outdoors) given in source.%calc: % biting indoors (or outdoors) calculated from data given in source.NB. the unit \u2018%calc\u2019 is where the source provides the raw data for indoor and outdoor biting densities but does not calculate the percentage indoors/outdoors. The unit indicates that the calculation has been done here.Indoor_number_sampling_nights_biting_activity. The sampling effort, in number of \u2018man nights\u2019, relevant to indoor biting activity data.Indoor_1830_2130. \u2018t\u2019 given here if indoor biting activity peaks in the first quarter of the night, includes dusk biting.Indoor_2130_0030. \u2018t\u2019 given here if indoor biting activity peaks in the second quarter of the night.Indoor_0030_0330. \u2018t\u2019 given here if indoor biting activity peaks in the third quarter of the night.Indoor_0330_0630. \u2018t\u2019 given here if indoor biting activity peaks in the fourth quarter of the night, includes dawn biting.Outdoor_number_sampling_nights_biting_activity. The sampling effort, in number of \u2018man nights\u2019, relevant to outdoor biting activity data.Outdoor_1830_2130. \u2018t\u2019 given here if outdoor biting activity peaks in the first quarter of the night, includes dusk biting.Outdoor_2130_0030. \u2018t\u2019 given here if outdoor biting activity peaks in the second quarter of the night.Outdoor_0030_0330. \u2018t\u2019 given here if outdoor biting activity peaks in the third quarter of the night.Outdoor_0330_0630. \u2018t\u2019 given here if outdoor biting activity peaks in the fourth quarter of the night, includes dawn biting.Combined_number_sampling_nights_biting_activity. The sampling effort, in number of \u2018man nights\u2019, relevant to biting activity data where data are presented for both indoor and outdoor biting combined.Combined_1830_2130. \u2018t\u2019 given here if combined biting activity peaks in the first quarter of the night, includes dusk biting.Combined_2130_0030. \u2018t\u2019 given here if combined biting activity peaks in the second quarter of the night.Combined_0030_0330. \u2018t\u2019 given here if combined biting activity peaks in the third quarter of the night.Combined_0330_0630. \u2018t\u2019 given here if combined biting activity peaks in the fourth quarter of the night, includes dawn biting.Indoor_resting_sampling. Indoor sampling method used to collect the mosquitoes to assess indoor resting behaviour. As \u2018Biology_sampling_1\u2019.Indoor_unfed. Total number of unfed mosquitoes in the sample collected indoors.Indoor_fed. Total number of fed mosquitoes in the sample collected indoors.Indoor_gravid. Total number of gravid mosquitoes in the sample collected indoors.Indoor_total. Total number of mosquitoes in the sample collected indoors, including unfed, fed and gravid females.Outdoor_resting_sampling. Outdoor sampling method used to collect the mosquitoes to assess outdoor resting behaviour. As \u2018Biology_sampling_1\u2019.Outdoor_unfed. Total number of unfed mosquitoes in the sample collected outdoors.Outdoor_fed. Total number of fed mosquitoes in the sample collected outdoors.Outdoor_gravid. Total number of gravid mosquitoes in the sample collected outdoors.Outdoor_total. Total number of mosquitoes in the sample collected outdoors, including unfed, fed and gravid females.Other_resting_sampling. Sampling methods relevant to \u2018other\u2019 data. 
These columns are used when additional sampling is reported, for example if indoor and outdoor resting mosquitoes are listed in the previous sections, but the source also reports data from a third sampling method such as mosquitoes resting in animal sheds. As \u2018Biology_sampling_1\u2019.Other_unfed. Total number of unfed mosquitoes in the sample collected by additional/\u2018other\u2019 methods.Other_fed. Total number of fed mosquitoes in the sample collected by additional/\u2018other\u2019 methods.Other_gravid. Total number of gravid mosquitoes in the sample collected by additional/\u2018other\u2019 methods.Other_total. Total number of mosquitoes in the sample collected by additional/\u2018other\u2019 methods, including unfed, fed and gravid females.Resting_unit. The unit relating to the indoor, outdoor or other resting data.Count: raw count data.%: percentage.Per man hour: total number collected divided by time spent collecting in hours.Fed:gravid: fed to gravid ratio, total number of fed specimens divided by total number of gravid specimens.Citation. The data source.PubMed_ID. PubMed ID, when available.
Bionomics data have been recorded by a large number of researchers, often using different sampling methods and reporting the data using different metrics. Due to the complicated and non-standard nature of the data, all data were reviewed and checked by a second data abstractor. The data were also checked to ensure that recorded values were within the possible ranges (for example between 0 and 100 for parameters recorded as percentages) and that all values had associated units.
To ensure all locations were accurately geo-located these were again confirmed by a second data abstractor. As many of the data sources identified in this project had previously been included in mapping projects on parasite rate16 and vector occurrence9, the geolocation coordinates for these sites had already been confirmed. Coordinates were also plotted to ensure that they fall on land and in the correct country.
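As a rough illustration of these checks, the following sketch (not the project's actual quality-control code; the input file name is hypothetical) applies the 0-100 range test to every *_percent field of the schema described above, and a basic plausibility test to the Lat/Long fields:

```python
# Our sketch of the range and coordinate checks described in the validation text.
import pandas as pd

df = pd.read_csv("Africa_bionomics.csv")  # hypothetical name for one regional table

# Percentage fields must lie within [0, 100] when present.
percent_cols = [c for c in df.columns if c.lower().endswith("_percent")]
for col in percent_cols:
    bad = df[df[col].notna() & ((df[col] < 0) | (df[col] > 100))]
    if not bad.empty:
        print(f"{col}: {len(bad)} record(s) outside 0-100")

# Latitude/longitude sanity check (a full check would also test land/country).
bad_coords = df[(df["Lat"].abs() > 90) | (df["Long"].abs() > 180)]
print(f"{len(bad_coords)} record(s) with impossible coordinates")
```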
This is the first time that a comprehensive global database has been compiled of published bionomics data for the DVS of human malaria. The dataset described here will be of value to researchers when assessing the likely impact of vector control measures on malaria transmission and to policy makers when deciding how malaria control resources are allocated. Searching the dataset for data related to a specific DVS, geographic location or bionomic parameter will allow the user to quickly identify the available data, and to link this back to the original data source. In addition, this dataset can be used to identify the current knowledge gaps in the behaviour and life history characteristics of the DVS across their geographic ranges.
The published studies did not use consistent units for each of the parameters of interest, and no attempt has been made to standardise the units as part of this work. It is vitally important that the values for each parameter are not treated as a single dataset that used a common methodology and unit. Users are strongly advised to examine the sampling methods and units fields provided for each parameter when making use of the data.
We will be using these data to test specific hypotheses relating to the DVS of Africa, including the presence of an east-west behavioural cline; whether insecticide control has caused a continent-wide non species-specific shift to exophagy amongst previously endophagic species; whether insecticide control has caused a continent-wide non species-specific shift in biting times amongst night biting species; and whether the DVS are really behaviourally flexible, or if the observed plasticity actually relates to different sub-species or sibling species within a complex.
How to cite this article: Massey, N.C. et al. A global bionomic database for the dominant vectors of human malaria. Sci. Data 3:160014 doi: 10.1038/sdata.2016.14 (2016)."} +{"text": "This article on biodiversity and life history data in huntsman spiders (Araneae: Sparassidae) includes the following: molecular data deposited on GenBank for 72 individuals representing 27 species in seven subfamilies, life history and behavioral data on 40 huntsman species from over two decades of observations, and morphological data for 26 species in the subfamily Deleninae as well as an undescribed representative of the genus Damastes. Molecular data include the nuclear genes histone H3 (H3) and 28S ribosomal RNA (28S rRNA) and the mitochondrial genes cytochrome c oxidase subunit I (COI) and 16S ribosomal RNA (16S rRNA), sequenced via Sanger sequencing by J.A. Gorneau. Life history data were collected in the field and in the lab by L.S. Rayor and include data on age at sexual maturity, lifespan, social classification, egg sac shape, how the egg sac is attached or carried, retreat location, retreat modification, retreat size relative to adult female body size, approximate mean body mass, and mean cephalothorax width. Morphological data on Deleninae and one Damastes sp. were scored by C.A. Rheims and include information on the following characters: prosoma (anterior median eye (AME) diameter, AME-AME and PME-PME interdistances), male palp (embolic sclerite (PS), conductor sclerotized base (SB), tegular apophysis (TA), flange (f)) and female epigyne and vulva (epigynal sclerite (ES), spermathecal sacs (SS)). These data were used to clarify relationships among the Australian endemic Deleninae, as well as global patterns in sparassid evolution. The data demonstrate phylogenetic patterns in life history, social evolution, and natural history among the sparassids. These data contribute to future comparative research on sparassid systematics, evolution, and behavior.
This data article complements a research article published in Molecular Phylogenetics and Evolution.
Specifications Table
\u2022 While focused molecular investigations and life history datasets are not uncommon, providing an integrated dataset with both molecular and life history data allows for examination of trends in the context of evolution.
\u2022 These data will be of particular use to arachnologists, evolutionary biologists, systematists, behavioral ecologists, and biogeographers looking to explore trends in comparative social evolution and life history, morphology, and more generally, the evolution of the Sparassidae and Australian endemism.
\u2022 Molecular data deposited in GenBank will be available for future studies of molecular evolution and phylogenetic analyses, as well as for species-based identification using the barcode gene cytochrome c oxidase subunit I (COI).
\u2022 Voucher exemplar specimens for molecular, morphological, and behavioral data are deposited in museums for replicability and additional analysis, including sequencing of additional loci in the future.
\u2022 Morphological and behavioral data will inform individuals designing their own character matrices for members of the Sparassidae, as well as provide a basis from which to designate and define certain life history characteristics.
\u2022 Long-term datasets presenting total evidence character traits are a rich source for researchers to design their own character matrices for comparative study.
1. These data present a detailed compilation of molecular, morphological, life history, and behavioral character states for representatives of 37 of the 89 genera of Sparassidae, the eleventh-most speciose spider family, and focus on taxa endemic to Australia. We provide tables including accession numbers for sequences contributed to GenBank, morphological and life history character matrices, and input files and code for each analysis. We also include R code in R Markdown (.Rmd) format for the phylogenetic comparative methods used in Gorneau et\u00a0al.
2.1 Primers_and_PCR_protocols.xlsx \u2014 Excel file with information on primers used in this study. C1-N-2776 was used for samples for which the HCO/LCO combination of primers did not amplify adequate DNA. Second sheet of file includes information on PCR protocols used.
Voucher_information.xlsx \u2014 Excel file with information on vouchers representing exemplars for the molecular data contributed in this study.
Specimens deposited in the National Museum of Natural History arachnology collections.
New_data_generated_GenBank.xlsx \u2014 Excel file that corresponds to the new sequence data generated in this study and deposited in GenBank.
GenBank_sequences.xlsx \u2014 Excel file with accession numbers for sequences downloaded from GenBank.
2.2 IQTREE_Sparassidae_10_Nov_2021.phy \u2014 Input file for IQ-TREE inference containing concatenated alignment for all four gene sequences of taxa used in this analysis.
IQTREE_partition.nex \u2014 Input NEXUS file for IQ-TREE inference containing details on partitioning of four genes in concatenated dataset IQTREE_Sparassidae_10_Nov_2021.phy.
IQTREE_10_Nov_2021.log \u2014 Output .log file for IQ-TREE inference containing details of run progress and models of molecular evolution selected.
IQTREE_10_Nov_2021.contree \u2014 Output IQ-TREE phylogeny with results of 10,000 ultrafast bootstrap replicates as nodal support values.
2.3 RAxML_Sparassidae_10_November.phy.raxml.startTree \u2014 Starting tree as inferred by IQ-TREE. The same tree file as IQTREE_10_Nov_2021.contree.
RAxML_22_Nov_2021_partition.txt \u2014 Input file for RAxML inference containing details on partitioning of four genes in concatenated dataset.
RAxML_Sparassidae_10_Nov_2021.phy \u2014 Input file for RAxML inference containing concatenated alignment for all four gene sequences of taxa used in this analysis.
RAxML_Sparassidae_10_Nov_2021.raxml.log \u2014 Output .log file for RAxML inference containing details of run progress and models of molecular evolution selected.
RAxML_Sparassidae_10_Nov_2021.raxml.bestTree \u2014 Best maximum likelihood tree as inferred by RAxML with bootstrap values as nodal support values.
2.4 MrBayes_21_Dec_2021.nex \u2014 Input NEXUS file for MrBayes inference with concatenated dataset of four genes, MrBayes block with information about sequence partitions and models used, as well as IQ-TREE phylogeny used as starting tree.
MrBayes_21_Dec_2021_stout.txt \u2014 Output file with information on MrBayes run including average standard deviation of split frequencies for duration of run.
MrBayes_21_Dec_2021.tre \u2014 Bayesian inference phylogeny output of MrBayes with posterior probability values as nodal support values.
2.5 Tree_convergence_huntsman.Rmd \u2014 R Markdown file with code to analyze convergence between IQ-TREE inferences and MrBayes and RAxML inferences. Employs the use of the phytools R package.
2.6 BEAST_28_Jan_2022_mono.xml \u2014 Input file for BEAST 2.6.0 containing information about priors for divergence dating including models of molecular evolution and fossil outgroup calibrations. This file was run independently three times through BEAST to generate three independent .log and .trees files that were then combined into BEAST_28_Jan_2022_combined_runs_123.log in the program LogCombiner.
BEAST_28_Jan_2022_combined_runs_123.log \u2014 The combined .log files for the three independent runs of BEAST. Contains information about effective sample sizes to determine the quality of the run in BEAST 2.6.0. Examined using Tracer v.1.7.1.
BEAST_28_Jan_combined_123_TA_25.tre \u2014 Maximum clade credibility tree from three independent BEAST runs identified by TreeAnnotator, with input merged .trees files from three runs and burn-in percentage 25%. Node heights set at keep target heights.
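For orientation, combined BEAST .log files such as the one above are tab-delimited tables with '#' comment headers, so the 25% burn-in used above can be reproduced in a few lines of Python (a sketch of ours, not part of the deposited code):

```python
# Our sketch: trim the burn-in from a combined BEAST log and summarize the posterior.
import pandas as pd

log = pd.read_csv("BEAST_28_Jan_2022_combined_runs_123.log", sep="\t", comment="#")
burnin = int(len(log) * 0.25)                # matches the 25% burn-in reported above
post = log.iloc[burnin:]                     # retained posterior samples
print(post.describe().loc[["mean", "std"]])  # quick per-parameter posterior summary
```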
Node heights set at keep target heights.2.7Life_history_data.xlsx \u2014 Excel file containing two spreadsheets: one named Character_States with information about character states for nine life history characters, and one named Character_Matrix with character matrix of life history attributes.lh_matrix.csv \u2014 .csv file with same character matrix from Life_history_data.xlsx Excel file, for use in stochastic character mapping and D-test code in Stochastic_character_mapping_and_D-test_huntsman.Rmd.lifeHistorysubName.csv \u2014 .csv file with substitutions of names from IQ-TREE phylogeny tip labels to names for stochastic character mapping; essentially removes specific identifiers for ease of visualization in Stochastic_character_mapping_and_D-test_huntsman.Rmd code.Stochastic_character_mapping_and_D-test_huntsman.Rmd \u2014 R Markdown file containing code for stochastic character mapping of life history and D-test correlation analysis. Requires the use of the following R packages: corHMM, phylotools, and phytools. Also available on GitHub.Stochastic_character_mapping_model_selection.csv \u2014 .csv file with output of model selection including Akaike information criterion (AIC) values from Stochastic_character_mapping_and_D-test_huntsman.Rmd code among the following models: equal rates (ER), symmetric (SYM), and all rates different (ARD) for all life history characters.D_test_p_values.xlsx \u2014 Excel file with output partitioned by individual sheet showing the p-values of the D-test analysis in a pairwise matrix. The numeric codes for character states correspond to the following file: Life_history_data.xlsx.Stochastic_character_mapping_posterior_probabilities.xlsx \u2014 Posterior probabilities of stochastic character mapping analyses organized by character and then by individual node. Node numbers correspond to Stochastic_character_mapping_w_node_numbers.Stochastic_character_mapping_w_node_numbers.pdf \u2014 PDF file with results of stochastic character mapping analyses with node numbers so Stochastic_character_mapping_posterior_probabilities.xlsx can be consulted to examine specific posterior probabilities for specific life history characters by node.2.8Morphology_data.xlsx \u2014 Excel file with four sheets: Character_States describing the characters and character states in the Character_Matrix sheet; the Character_Matrix sheet, which is the basis for the input file morpho_matrix.csv; Morphological_Vouchers sheet with information on specimens directly examined for morphological character scoring; and Morpohology_From_Literature for species scored with the use of literature, either instead of direct examination or in tandem with direct observation.morpho_matrix.csv \u2014 .csv file of character matrix from Morphology_data.xlsx slightly altered such that the questionable characters were marked as the characters they are likely to be for ease of plotting using the code in Morphology_huntsman.Rmd. The uncertainty was then readded in Adobe Illustrator.morphosubName.csv \u2014 .csv file with substitutions of names from IQ-TREE phylogeny tip labels to names for mapping of morphological data matrix from morpho_matrix.csv; essentially removes the specific identifiers for ease of visualization in Stochastic_character_mapping_and_D-test_huntsman.Rmd code.Morphology_huntsman.Rmd \u2014 R Markdown file containing code for mapping data matrix of morphological data Deleninae\u00a0+\u00a0Damastes. Input files: morpho_matrix.csv, morphosubName.csv, IQTREE_10_Nov_2021.contree. 
Employs the use of the following packages: corHMM, phylotools, phytools, and RColorBrewer. Also available on GitHub.Male_genitalia.tif \u2013 Male, left palp, ventral view. A: Beregama cordata . B: Delena cancerides Walckenaer, 1837. C: Holconia flindersi Hirst, 1991. D: Isopeda villosa L. Koch, 1875. E: Isopedella leai . F: Neosparassus salacius . G: Pediana regina . H: Typostola barbata . I: Zachria flavicoma L. Koch, 1875. Scale lines: 1 mm. F\u00a0=\u00a0tegular flange; PS\u00a0=\u00a0palpal embolic sclerite; SB\u00a0=\u00a0conductor sclerotized base; TA\u00a0=\u00a0Deleninae tegular apophysis.Female_genitalia.tif \u2013 Female, genitalia. A\u2013B: Delena cancerides Walckenaer, 1837 . C\u2013D: Holconia flindersi Hirst, 1991 . E\u2013F: Isopeda villosa L. Koch, 1875 . G\u2013H: Isopedella conspersa . I\u2013J: Isopedella leai . K\u2013L: Pediana regina . M\u2013N: Typostola barbata . O\u2013P: Zachria flavicoma L. Koch, 1875 . Scale lines: 1 mm. ES\u00a0=\u00a0epigynal sclerite; SS\u00a0=\u00a0spermathecal sac.33.1Behavior and life history traits for each species were observed in the field and/or laboratory. Spiders were fed a diet of crickets (Gryllodes sigillatus and Acheta domesticus), houseflies (Musca domestica), calliphorid flies, and fruit flies (Drosophila spp.). A table with specimen information and voucher information for exemplars of molecular work is included in the Zenodo dataset (Voucher_information.xlsx). Legs from individuals were stored in ethanol and placed in a -20\u00b0C freezer prior to DNA extraction.3.2A total of 54 samples were extracted in fall 2019 by JAG using the QIAGEN DNeasy PowerSoil Kit . A single adult leg or whole bodies of immatures were used for DNA extraction. An additional 30 samples were extracted by Dr. Ingi Agnarsson's lab group at the University of Vermont using the QIAGEN DNeasy Tissue Kit . DNA from these extractions was then amplified using polymerase chain reaction (PCR) on a BioRad 96-well C1000 Touch Thermal Cycler for two mitochondrial genes (cytochrome c oxidase subunit I (COI) and 16S ribosomal RNA (16S rRNA)) and two nuclear genes (histone H3 (H3) and 28S ribosomal RNA (28S rRNA)). In short, PCR reactions involved 6.5 \u00b5L water, 12.5 \u00b5L EconoTaq\u00ae PLUS Master Mix , 2.5 \u00b5L each of forward and reverse primer (10 \u00b5M), and 1 \u00b5L of DNA for each 25 \u00b5L reaction. PCR products were visualized using gel electrophoresis on a 1% agarose gel with 2 \u00b5L of DNA and 2 \u00b5L GelRed dye under a BioRad UV transilluminator and imaged using the ImageLab\u2122 software . PCR products were purified using ExoSAP-IT to remove remaining primers and dNTPs. Purified PCR samples were quantified using a Qubit 4 fluorometer and Biotium High Sensitivity AccuGreen dye to determine the quantity of DNA after purification and immediately prior to sequencing. Samples were cycle-sequenced in the forward and reverse directions and diluted based on the DNA concentration for full-service (post-PCR purification) sequencing at the Cornell Genomics Facility .
The primers used for each gene in this study, and PCR conditions for each gene can be found in the Zenodo dataset (3.3GenBank_sequences.xlsx).In addition, sequences for a total of 201 samples were downloaded from GenBank , Uloborus diversus Marx 1898 ), Oecobius Blackwall 1862 (Oecobiidae), Uroctea durandi ((Latreille 1809); Oecobiidae), Peucetia viridans ((Hentz 1832); Oxyopidae), Dolomedes tenebrosus Hentz 1844 (Pisauridae), Salticus scenicus , Selenops muehlmannorum J\u00e4ger & Praxaysombath 2011 (Selenopidae), and Tibellus chamberlini Gertsch 1933 (Thomisidae). The IQ-TREE inference was used as a starting tree for inferences in RAxML and MrBayes with the concatenated sequences partititioned by gene to see if the tree topologies were comparable Aligned sequences were partitioned and exported from Mesquite in .phy format for analysis in IQ-TREE version 1.6.12 3.6Zamilia aculeopectens Wunderlich 2015 (Oecobiidae) for the node representing Oecobiidae, Oxyopes succini Petrunkevitch 1958 (Oxyopidae) for Oxyopidae\u00a0+\u00a0Pisauridae, Almolinus ligula Wunderlich 2004 for the node containing the outgroups Salticidae\u00a0+\u00a0Thomisidae, and \u2018Selenops\u2019 sp. indet. Wunderlich 1988 (Selenopidae) for the node with outgroups Selenopidae sister to Salticidae\u00a0+\u00a0Thomisidae Zamilia aculeopectens, an exponential distribution was set with an offset of 98.17 and a mean of 0.32. For Oxyopes succini and Almolinus ligula, an exponential distribution was set with an offset of 43.0 my, and a mean of 1.3. For \u2018Selenops\u2019 sp. indet., an exponential distribution was set with an offset of 53.0 my, and a mean of 0.8. As such, these analyses were conducted in accordance with the latest work on fossil calibration of phylogenies by Magalh\u00e3es et\u00a0al. An estimation of divergence time was conducted in BEAST using fossil calibrations recommended for use by Magalh\u00e3es et\u00a0al. 3.7Life_history_data.xlsx).From 2002 \u2013 2021, LSR collected life history and behavioral data from 40 sparassid species, with emphasis on the endemic Australian Deleninae. Data was collected in the field and in her laboratory at Cornell University (USA). Life history variables included: mother-offspring dynamics and sociality, egg sac structure, how the egg sac was attached to the retreat or carried, retreat type, modifications to the retreat, adult female body mass and cephalothorax width, age at sexual maturity, and lifespan. These character states are outlined in the Zenodo dataset , \u2018subsocial\u2019 species dispersed between four \u2013 five weeks (third or fourth instar), \u2018prolonged subsocial\u2019 species remained in mother-offspring groups for five to twelve months prior to dispersing.Three types of egg sacs were observed in the species studied: \u2018plastered\u2019 with a ground sheet silked onto the substrate and the rest of the sac built onto that attached sheet, a \u2018lenticular\u2019 (a relatively flat round disc), and a \u2018spherical\u2019 shape. Egg sacs varied in support structure among the sparassids studied: egg sacs were either \u2018completely adhered\u2019 to the substrate and immobile, \u2018tethered\u2019 by guy-lines of silk and relatively immobile, or actively \u2018carried\u2019 by the adult female under her venter. Adhered egg sacs were only accessible on one side, while tethered and carried egg sacs were relatively accessible on both sides. 
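The categorical states described in this section are what populate the life history matrix (lh_matrix.csv) used by the stochastic character mapping and D-test code. As a minimal sketch of how such a species-by-character matrix can be assembled in Python, assuming placeholder numeric codes and column names (the authoritative state definitions live in the Character_States sheet of Life_history_data.xlsx):

```python
# Illustrative only: the authoritative state codes are defined in
# Life_history_data.xlsx (Character_States sheet); codes below are placeholders.
import pandas as pd

lh = pd.DataFrame(
    {
        "sociality": [2, 0],        # e.g. 0 = solitary, 1 = subsocial, 2 = prolonged subsocial
        "egg_sac_shape": [0, 1],    # e.g. 0 = plastered, 1 = lenticular, 2 = spherical
        "egg_sac_support": [0, 2],  # e.g. 0 = adhered, 1 = tethered, 2 = carried
        "retreat_type": [0, 3],     # e.g. 0 = bark, 1 = rocks, 2 = foliage, 3 = open
    },
    index=["Delena_cancerides", "Heteropoda_venatoria"],
)
lh.index.name = "species"
lh.to_csv("lh_matrix_example.csv")  # same shape as lh_matrix.csv: species x characters
```

A matrix in this shape, with row labels matching the phylogeny tip labels (after the lifeHistorysubName.csv renaming), is what the mapping code in Stochastic_character_mapping_and_D-test_huntsman.Rmd expects.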
The spiders in this analysis used retreats under tree \u2018bark\u2019 or in small hollows in trees, \u2018rocks\u2019, in \u2018dead foliage\u2019, in \u2018living foliage\u2019, or in the open without a retreat (\u2018in the open\u2019). Modifications included either \u2018silk bonds\u2019 whch are repeated short silken swaths that bind the bark/rock/leaves together, effectively limiting access to the retreat, or a small \u2018silken cage\u2019 that completely surrounds the female and her egg sac forming a retreat, or \u2018none\u2019 in which there was no silk modification of the retreat.Additionally, parameters of body size , age of female at sexual maturity, and average life span in captivity were collected.3.8To investigate the evolution of life history in the context of phylogeny, the inferred IQ-TREE phylogeny tips were trimmed using the keep.tip function in the R package phytools to the 40 species for which life history data as collected by LSR existed 3.9Damastes were tabulated in a matrix. Character scoring was based on direct examination of available specimens , and when unavailable, from literature Male_genitalia.tif). Female epigynes were dissected and illustrated in ventral and dorsal views. In dorsal view illustrations, the hyaline part of the copulatory ducts was omitted .Relevant morphological data for the endemic Australian Deleninae and This work is consistent with the ethical requirements and standards for publication. This study did not include any human subjects, data collected from social media platforms, or animal experiments requiring approval. The authors have no conflict of interest to disclose.Jacob A. Gorneau: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Software, Validation, Visualization, Writing \u2013 original draft, Writing \u2013 review & editing. Linda S. Rayor: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Writing \u2013 original draft, Writing \u2013 review & editing. Cristina A. Rheims: Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Resources, Supervision, Validation, Writing \u2013 review & editing. Corrie S. Moreau: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Writing \u2013 review & editing.The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."} +{"text": "Using genomics, bioinformatics and statistics, herein we demonstrate the effect of statewide and nationwide quarantine on the introduction of SARS-CoV-2 variants of concern (VOC) in Hawai\u2019i. To define the origins of introduced VOC, we analyzed 260 VOC sequences from Hawai\u2019i, and 301,646 VOC sequences worldwide, deposited in the GenBank and global initiative on sharing all influenza data (GISAID), and constructed phylogenetic trees. The trees define the most recent common ancestor as the origin. Further, the multiple sequence alignment used to generate the phylogenetic trees identified the consensus single nucleotide polymorphisms in the VOC genomes. These consensus sequences allow for VOC comparison and identification of mutations of interest in relation to viral immune evasion and host immune activation. 
Of note is the P71L substitution within the E protein (the protein sensed by TLR2 to produce cytokines) found in the B.1.351 VOC, which may diminish the efficacy of some vaccines. Based on the phylogenetic trees, the B.1.1.7, B.1.351, B.1.427, and B.1.429 VOC have been introduced in Hawai\u2019i multiple times since December 2020 from several definable geographic regions. The interval from the first worldwide report of a VOC in GenBank and GISAID to its first arrival in Hawai\u2019i averages 320 days with quarantine and 132 days without quarantine. As such, quarantine is shown to significantly affect the time to arrival of VOC in Hawai\u2019i. Further, the collective 2020 quarantine of 43 states in the United States demonstrates a profound impact in delaying the arrival of VOC in states that did not practice quarantine, such as Utah. Our data demonstrate that at least 76% of all definable SARS-CoV-2 VOC have entered Hawai\u2019i from California, with the B.1.351 variant in Hawai\u2019i originating exclusively from the United Kingdom. These data provide a foundation for policy-makers and public-health officials to apply precision public health genomics to real-world policies such as mandatory screening and quarantine. Hawai\u2019i has experienced unique epidemics within the coronavirus disease 2019 (COVID-19) pandemic, in that Pacific Islanders, which account for 4% of the population, once accounted for nearly 30% of COVID-19 cases. Reviewers' comments:Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes**********\u00a0 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes**********\u00a0 3. Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes**********\u00a0 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes**********\u00a0 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. 
Reviewer #1:\u00a0This paper focuses on analyzing the VOCs that have been found in Hawai\u00ed and performs an analysis of the VOC variants to determine their point of origin. The authors also analyze the case numbers during quarantine and post quarantine in an attempt to demonstrate the efficacy of quarantine in delaying the entry of VOC to Hawai\u00ed, which was, (unsurprisingly) confirmed through analysis and comparison with Utah.The strength of the paper is the thoroughness of the analysis of the genomic data available for the VOCs in Hawaii and the comparison to the VOCs worldwide from banked genomic data. The methods and the analysis of the genomic data is very well presented and explained.What I feel the authors could improve is the background information to help the general reader better understand the significance and impact of the data presented. In particular, I would suggest that the authors rewrite the introduction to describe the general epidemiological trends of COVID infection in Hawai\u00ed, and also provide more details of what is meant by \u2018quarantine\u2019 as this has different guidelines in different countries. This would then provide the readers with a better background heading into the core findings and be able to better appreciate the findings. In the intro line 52 to 63 reads like content better suited to the discussion than the introduction?I would also suggest that in terms of the discussion there are a few other points the authors may wish to briefly mention in the writing and discussion \u2013 it is of course of great epidemiological significance to identify the source of infection to understand the pattern of infection and global spread, however I would argue that the authors assertions (line 467) that the source of infection must be ascertained before steps can be taken may be overstating the case as by that time the case is already in, and it may be more appropriate to argue that understanding the origin of cases may be a reason to look at the processes in that country or in the infection control measures in place in that country for review? With regard to line 367 where the authors have highlighted that the highest number came from California I would suggest inferring some suggestions as to why \u2013 did California have different regulations on COVID control? Or was it because more people entering Hawai\u00ed were from California? Ideas about this then provide more guidance to public health measures at appropriate points in the chain of transmission. In addition, while limiting case numbers is of paramount concern, economic and social considerations also are a factor in deciding on the measure to implement - meaning the data and statistics here are a key consideration, but they are not the only ones.Reviewer #2:\u00a0dear author,this paper is much appreciable and it gives the origin and spread of variants of SARS Cov-2 in different areas.methods should have been simplified with flowchart or something. then it would be easy to reproduce by some other .**********\u00a0 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose \u201cno\u201d, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. 
Reviewer #1:\u00a0Yes:\u00a0Priyia PusparajahReviewer #2:\u00a0No**********While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool,\u00a0https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at\u00a0figures@plos.org. Please note that Supporting Information files do not need this step.
18 Oct 2022Editor Comments 1 (09/26/2022):Comment 1:1. Thank you for your response regarding the potential copyright of your Figures. We note that you have contacted the copyright holder directly. Given what we have seen in the postscript, their approval should be enough to proceed.At this time, please upload screenshots of your email correspondence with the copyright holder with your resubmission, and this should be good to proceed.Response 1:We thank the editors for this comment. Figures generated with the usmap package have been restored and the email correspondence with the copyright holder has been included.Editor Comments 2 (09/09/2022): Comment 1:1. We note that the grant information you provided in the \u2018Funding Information\u2019 and \u2018Financial Disclosure\u2019 sections do not match.When you resubmit, please ensure that you provide the updated Funding Information.Response 1: We thank the editor for this comment. The Funding Information has been updated to match the Financial Disclosure._________________________________________________________________________Comment 2:2. We note that several of your files are duplicated on your submission. Please remove any unnecessary or old files from your revision, and make sure that only those relevant to the current version of the manuscript are included.Response 2: We thank the editor for this comment. The duplicate and old files have been removed._________________________________________________________________________Comment 3:3. Thank you for your response regarding the potential copyright of your Figures. Unfortunately, at this time, it appears that the package usmap uses the GPL license, which is not compatible with our CC-BY 4.0 license. As such, please note the below prompts:A. Was the usmap package used for both Figure 2 and Figure 3?B. For any Figure that used the usmap package, we will require specific consent from the copyright holder to publish these images in PLOS ONE, under the CC BY 4.0 license. To seek permission from the copyright owner to publish your map figures under the specific Creative Commons Attribution License (CCAL), CC BY 4.0, please contact them with the following text and PLOS ONE Request for Permission form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf):\u201cI request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license.\u201dIf you are unable to obtain permission, please either A) remove the figure or B) supply a replacement figure that complies with the CC BY 4.0 license. 
Please check copyright information on all replacement figures and update the figure caption with source information.The following resources for replacing copyrighted map figures may be helpful:http://viewer.nationalmap.gov/viewer/)USGS National Map Viewer (http://eros.usgs.gov/#)USGS Earth Resources Observatory and Science (EROS) Center (https://eol.jsc.nasa.gov/)The Gateway to Astronaut Photography of Earth (https://www.cia.gov/library/publications/the-world-factbook/docs/refmaps.html)Maps at the CIA (http://earthobservatory.nasa.gov/)NASA Earth Observatory (http://landsat.visibleearth.nasa.gov/)Landsat Natural Earth . We thank the editor for this comment. A) Yes, the usmap package was used for both figure 2 and 3. We have removed all images generated with the usmap package and have replaced the images with those acquired from the recommended Natural Earth (_________________________________________________________________________Editor Comments 3 (07/15/2022):Comment 1:https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: Response 1:doi.org/10.17504/protocols.io.x54v9yqz4g3e/v1 (Private link for reviewers: https://www.protocols.io/private/3136D4A315E611ED832E0A58A9FEAC02 to be removed before publication.). We will release the protocol publicly after the manuscript is accepted for publication.We thank the editor for this recommendation. We have submitted the protocol to protocols.io with the following DOI: dx._________________________________________________________________________Comment 2:1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdfResponse 2:We thank the editor for this comment. The manuscript has been updated to meet PLOS ONE style requirements. _________________________________________________________________________Comment 3:http://journals.plos.org/plosone/s/licenses-and-copyright.2. We note that Figure 2 in your submission contain map images which may be copyrighted. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software . For more information, see our copyright guidelines: We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:a. 
You may seek permission from the original copyright holder of Figure 2 to publish the content specifically under the CC BY 4.0 license. http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:We recommend that you contact the original copyright holder with the Content Permission Form (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.\u201d\u201cI request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.The following resources for replacing copyrighted map figures may be helpful:http://viewer.nationalmap.gov/viewer/USGS National Map Viewer (public domain): http://eol.jsc.nasa.gov/sseop/clickmap/The Gateway to Astronaut Photography of Earth (public domain): https://www.cia.gov/library/publications/the-world-factbook/index.html and https://www.cia.gov/library/publications/cia-maps-publications/index.htmlMaps at the CIA (public domain): http://earthobservatory.nasa.gov/NASA Earth Observatory (public domain): http://landsat.visibleearth.nasa.gov/Landsat: http://eros.usgs.gov/#USGS EROS (Earth Resources Observatory and Science (EROS) Center) (public domain): http://www.naturalearthdata.com/Natural Earth (public domain): Response 3:We thank the editor for this comment and recommendations. We have opted to replace the image and have produced the images ourselves using open-source R with usmap and ggplot2 packages._________________________________________________________________________Comment 4:3. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article\u2019s retracted status in the References list and also include a citation and full reference for the retraction notice.Response 4:We thank the editor for this comment. All references have been either confirmed or updated.github.com/cov-lineages/pangolin6. O\u2019Toole \u00c1, Scher E, Underwood A, Jackson B, Hill V, McCrone J, et al. pangolin: lineage assignment in an emerging pandemic as an epidemiological tool. In: PANGO lineages [Internet]. 2021 [cited 11 Mar 2021]. Available: Has been published since our original submission and has been replaced with:6. O\u2019Toole \u00c1, Scher E, Underwood A, Jackson B, Hill V, McCrone JT, et al. Assignment of epidemiological lineages in an emerging pandemic using the pangolin tool. Virus Evolution. 2021;7: veab064. doi:10.1093/ve/veab064https://www.R-project.org/16. R Core Team. R: A language and environment for statistical ## computing. [Internet]. 
Vienna, Austria: R Foundation for Statistical Computing; 2020. Available from: Has been updated to: https://www.R-project.org/16. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2020. Available: 17. Wickham H. ggplot2: Elegant Graphics for Data Analysis. 2nd ed. 2016. Cham: Springer International Publishing\u202f: Imprint: Springer; 2016. 1 p. (Use R!).Has been updated to:https://ggplot2.tidyverse.org17. Wickham H. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York; 2016. Available from: 28. Scobie H. Update on Emerging SARS-CoV-2 Variants and Vaccine Considerations. 2021 May 12;30.Has been updated to:https://www.cdc.gov/vaccines/acip/meetings/downloads/slides-2021-05-12/10-COVID-Scobie-508.pdf28. Scobie H. Update on Emerging SARS-CoV-2 Variants and Vaccine Considerations. 2021 May 12. Available from: 33. Jangra S, Ye C, Rathnasinghe R, Stadlbauer D, PVI study group, Krammer F, et al. The E484K mutation in the SARS-CoV-2 spike protein reduces but does not abolish neutralizing activity of human convalescent and post-vaccination sera. Infectious Diseases (except HIV/AIDS); 2021 Jan. doi:10.1101/2021.01.26.21250543Has been published since our original submission and has been replaced with:33. Jangra S, Ye C, Rathnasinghe R, Stadlbauer D, Personalized Virology Initiative study group, Krammer F, et al. SARS-CoV-2 spike E484K mutation reduces antibody neutralisation. Lancet Microbe. 2021;2: e283\u2013e284. doi:10.1016/S2666-5247(21)00068-935. Deng X, Garcia-Knight MA, Khalid MM, Servellita V, Wang C, Morris MK, et al. Transmission, infectivity, and antibody neutralization of an emerging SARS-CoV-2 variant in California carrying a L452R spike protein mutation. medRxiv. 2021; 2021.03.07.21252647. doi:10.1101/2021.03.07.21252647Has been published since our original submission and has been replaced with:35. Deng X, Garcia-Knight MA, Khalid MM, Servellita V, Wang C, Morris MK, et al. Transmission, infectivity, and neutralization of a spike L452R SARS-CoV-2 variant. Cell. 2021;184: 3426-3437.e8. doi:10.1016/j.cell.2021.04.02536. Li Q, Wu J, Nie J, Zhang L, Hao H, Liu S, et al. The Impact of Mutations in SARS-CoV-2 Spike on Viral Infectivity and Antigenicity. Cell (Cambridge). 2020;182: 1284-1294.e9. doi:10.1016/j.cell.2020.07.012Has been updated to:36. Li Q, Wu J, Nie J, Zhang L, Hao H, Liu S, et al. The Impact of Mutations in SARS-CoV-2 Spike on Viral Infectivity and Antigenicity. Cell. 2020;182: 1284-1294.e9. doi:10.1016/j.cell.2020.07.01238. Arag\u00f3n TJ, Newsom G. California Department of Public Health - Health Alert: Concerns re: the Use of Bamlanivimab Monotherapy in the Setting of SARS-CoV2 Variants. 2021; 4.Has been updated to:http://publichealth.lacounty.gov/eprp/lahan/alerts/CAHANBamlanivimabandSARSCoV2Variants.pdf38. Arag\u00f3n TJ, Newsom G. California Department of Public Health - Health Alert: Concerns re: the Use of Bamlanivimab Monotherapy in the Setting of SARS-CoV2 Variants. 2021. Available from: The following has been removed due to being revoked:39. Moruf A. Fact Sheet For Health Care Providers Emergency Use Authorization (Eua) Of Bamlanivimab. 2021; 26._________________________________________________________________________[Note: HTML markup is below. Please do not edit.]Reviewers' comments:Reviewer's Responses to QuestionsComments to the Author1. 
Is the manuscript technically sound, and do the data support the conclusions?The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.Reviewer #1: YesReviewer #2: Yes2. Has the statistical analysis been performed appropriately and rigorously?Reviewer #1: YesReviewer #2: Yes3. Have the authors made all data underlying the findings in their manuscript fully available?The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified.Reviewer #1: YesReviewer #2: Yes4. Is the manuscript presented in an intelligible fashion and written in standard English?PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1: YesReviewer #2: Yes5. Review Comments to the AuthorPlease use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. _________________________________________________________________________Reviewer 1:Comment 1:Reviewer #1: This paper focuses on analyzing the VOCs that have been found in Hawai\u00ed and performs an analysis of the VOC variants to determine their point of origin. The authors also analyze the case numbers during quarantine and post quarantine in an attempt to demonstrate the efficacy of quarantine in delaying the entry of VOC to Hawai\u00ed, which was, (unsurprisingly) confirmed through analysis and comparison with Utah.The strength of the paper is the thoroughness of the analysis of the genomic data available for the VOCs in Hawaii and the comparison to the VOCs worldwide from banked genomic data. The methods and the analysis of the genomic data is very well presented and explained.Response 1:We thank the reviewer for these comments._________________________________________________________________________Comment 2:What I feel the authors could improve is the background information to help the general reader better understand the significance and impact of the data presented. In particular, I would suggest that the authors rewrite the introduction to describe the general epidemiological trends of COVID infection in Hawai\u00ed, and also provide more details of what is meant by \u2018quarantine\u2019 as this has different guidelines in different countries. This would then provide the readers with a better background heading into the core findings and be able to better appreciate the findings. 
In the intro line 52 to 63 reads like content better suited to the discussion than the introduction?Response 2:We thank the reviewer for these comments. In the revised manuscript, lines 52 to 63 have been moved to the discussion and we have addressed the remaining comments as follows:\u201cHawaii has experienced unique epidemics within the coronavirus disease 2019 (COVID-19) pandemic, in that Pacific Islanders, which account for 4% of the population, once accounted for nearly 30% of COVID-19 cases.(1) Further, the Japanese population of Hawaii currently accounts for 6% of the population and experiences 15% of COVID-19 cases. White persons, in contrast, account for 37% of the population and 25% of the cases.(2) As such, a heightened need exists to understand SARS-CoV-2 introduction into Hawaii and the effect of public policy measures. Early in the pandemic, in an attempt to control the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Hawaii, like 42 other states in the United States, implemented a quarantine defined by \u201cStay-at-Home\u201d orders. State-at-Home orders directed residents to stay inside homes except for essential needs and closed operations of non-essential businesses.(3) In addition to this public policy, more than 22,300 SARS-CoV-2 sequences submitted to GISAID and GenBank originate from Hawaii to facilitate further studies.\u201d_________________________________________________________________________Comment 3:I would also suggest that in terms of the discussion there are a few other points the authors may wish to briefly mention in the writing and discussion \u2013 it is of course of great epidemiological significance to identify the source of infection to understand the pattern of infection and global spread, however I would argue that the authors assertions (line 467) that the source of infection must be ascertained before steps can be taken may be overstating the case as by that time the case is already in, and it may be more appropriate to argue that understanding the origin of cases may be a reason to look at the processes in that country or in the infection control measures in place in that country for review? Response 3:We thank the reviewer for this discussion and argument. We have made the statement less assertive and included the reasoning provided in this comment in the revised manuscript as follows: \u201cPolicy-makers should first ascertain the source of the spread before they can control and limit the spread of future VOC. By understanding the source responsible for the highest number of cases, policy-makers can look at interactions between that area and the host area, the policies in that area, identify the reasons for the spread, and address those reasons with appropriate measures both in the present and in future COVID-19 waves.\u201d_________________________________________________________________________Comment 4:With regard to line 367 where the authors have highlighted that the highest number came from California I would suggest inferring some suggestions as to why \u2013 did California have different regulations on COVID control? Or was it because more people entering Hawai\u00ed were from California? Ideas about this then provide more guidance to public health measures at appropriate points in the chain of transmission. 
In addition, while limiting case numbers is of paramount concern, economic and social considerations also are a factor in deciding on the measure to implement - meaning the data and statistics here are a key consideration, but they are not the only ones.Response 4:We thank the reviewer for this comment and suggestion. We have addressed this in the revised manuscript as follows:In 2020, 27% of all travelers to Hawai\u2019i originated from California, with 53% coming from the West Coast. Further, Hawai\u2019i residents traveled to the West Coast, specifically Las Vegas, Nevada. However, the following is additional information:\u201cFrom the analysis of the SARS-CoV-2 sequence data, a policy-maker could reasonably consider focusing on additional screening, contact tracing, and quarantine efforts among visitors and residents arriving from and traveling to the West Coast of the continental United States. There are several possible reasons for this vast majority of SARS-COV-2 influx from the US West Coast. In 2020, 27% of all travelers to Hawaii originated from California, with 53% coming from the West Coast. California's biggest domestic traveling demographic is in-state travel, meaning that the state likely spreads SARS-CoV-2 efficiently and uniformly within California.(44) Second, Hawaii residents traveling to the West Coast and returning home once infected with the virus. The first case of COVID-19 in Hawaii and the first case of the Delta variant were brought to Hawaii by residents (both vaccinated and unvaccinated) returning from travel .(45\u201347) Additionally, 62% of early cases in Hawaii were in either visitors to Hawaii or returning residents.(47) There are presumably additional factors that participated in the 76% of SARS-CoV-2 VOC attributable to California. Regardless, policymakers must evaluate these possible collective factors and social and economic implications together to determine the appropriate public-policy action.\u201d _________________________________________________________________________Reviewer 2:Reviewer #2: dear author,Comment 1:this paper is much appreciable and it gives the origin and spread of variants of SARS Cov-2 in different areas.Response 1:We thank the reviewer for this comment. _________________________________________________________________________Comment 2:methods should have been simplified with flowchart or something. then it would be easy to reproduce by some other .Response 2:We thank the reviewer for this comment and have added a flowchart as a figure and have uploaded the method to protocols.io. _________________________________________________________________________AttachmentRebuttal_09.26.2022.docxSubmitted filename: Click here for additional data file. 15 Nov 2022Genomic Analysis of SARS-CoV-2 Variants of Concern Circulating in Hawai\u2019i to Facilitate Public-Health PoliciesPONE-D-21-19856R1Dear Dr. Maison,We\u2019re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.Within one week, you\u2019ll receive an e-mail detailing the required amendments. When these have been addressed, you\u2019ll receive a formal acceptance letter and your manuscript will be scheduled for publication.http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. 
If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at onepress@plos.org.If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they\u2019ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact Kind regards,Ming ZhangAcademic EditorPLOS ONEAdditional Editor Comments :Reviewers' comments: 21 Nov 2022PONE-D-21-19856R1 Genomic Analysis of SARS-CoV-2 Variants of Concern Circulating in Hawai\u2019i to Facilitate Public-Health Policies Dear Dr. Maison:I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. onepress@plos.org.If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact plosone@plos.org. If we can help with anything else, please email us at Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staffon behalf ofDr. Ming Zhang Academic EditorPLOS ONE"} +{"text": "Chromatin loops are an essential factor in the structural organization of the genome; however, their detection in Hi-C interaction matrices is a challenging and compute-intensive task. The approach presented here, integrated into the HiCExplorer software, shows a chromatin loop detection algorithm that applies a strict candidate selection based on continuous negative binomial distributions and performs a Wilcoxon rank-sum test to detect enriched Hi-C interactions.HiCExplorer\u2019s loop detection has a high detection rate and accuracy. It is the fastest available CPU implementation and utilizes all threads offered by modern multicore platforms.in situ Hi-C data contain a large amount of noise; achieving better agreement between loop calling algorithms will require cleaner Hi-C data and therefore future improvements to the experimental methods that generate the data.HiCExplorer\u2019s method to detect loops by using a continuous negative binomial function combined with the donut approach from HiCCUPS leads to reliable and fast computation of loops. All the loop-calling algorithms investigated provide differing results, which intersect by HiCCUPS is part of the software Juicer,1 and the implementation requires a general-purpose GPU (GPGPU), which imposes a barrier for users without access to Nvidia GPUs. However, an experimental CPU-based implementation has also been released. Algorithms such as iterative correction and eigenvector decomposition (ICE) ; German Federal Ministry of Education and Research [031 L0101C de.NBI-epi awarded to B.G.]. R.B. was supported by the German Research Foundation (DFG) under Germany\u2019s Excellence Strategy (CIBSS\u2013EXC-2189\u2013Project ID 390939984). 
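To make the two-stage test described for HiCExplorer's loop detection concrete, the following is a minimal schematic sketch in Python of scoring one candidate bin: a negative binomial tail probability, with parameters fitted by the method of moments on the surrounding "donut" neighborhood, followed by a Wilcoxon rank-sum comparison of the peak region against that neighborhood. This illustrates the statistical idea only and is not HiCExplorer's implementation; it uses scipy's discrete negative binomial rather than the paper's continuous variant, and the window sizes and Poisson fallback are assumptions.

```python
# Schematic sketch of a two-stage loop test; not HiCExplorer's code.
import numpy as np
from scipy import stats

def loop_candidate_pvalues(mat, i, j, peak=1, donut=5):
    """Score bin (i, j) of a Hi-C count matrix for loop-like enrichment."""
    lo_i, hi_i = max(i - donut, 0), min(i + donut + 1, mat.shape[0])
    lo_j, hi_j = max(j - donut, 0), min(j + donut + 1, mat.shape[1])
    window = mat[lo_i:hi_i, lo_j:hi_j].astype(float)
    # Mask the central peak region; the rest of the window is the "donut".
    peak_mask = np.zeros_like(window, dtype=bool)
    pi, pj = i - lo_i, j - lo_j
    peak_mask[max(pi - peak, 0):pi + peak + 1, max(pj - peak, 0):pj + peak + 1] = True
    background = window[~peak_mask]
    mean, var = background.mean(), background.var()
    if var <= mean:
        # No overdispersion detected: fall back to a Poisson tail (assumption).
        p_pre = stats.poisson.sf(mat[i, j] - 1, mean)
    else:
        p = mean / var                # method-of-moments NB parameters
        n = mean * p / (1.0 - p)
        p_pre = stats.nbinom.sf(mat[i, j] - 1, n, p)
    # Stage 2: Wilcoxon rank-sum test, peak region vs donut (scipy >= 1.7).
    p_rank = stats.ranksums(window[peak_mask], background, alternative="greater").pvalue
    return p_pre, p_rank
```

A candidate such as `loop_candidate_pvalues(counts, 120, 180)` would be kept only if both p-values fall below the chosen significance thresholds.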
We acknowledge support by the Open Access Publication Fund of the University of Freiburg for contributing to the publication fees.J.W. designed and implemented the presented algorithm and wrote the manuscript. R.B. contributed to the manuscript. B.G. contributed to the manuscript.Supplementary files: giac061_GIGA-D-21-00069_Original_Submission; giac061_GIGA-D-21-00069_Revision_1; giac061_GIGA-D-21-00069_Revision_2; giac061_GIGA-D-21-00069_Revision_3; giac061_GIGA-D-21-00069_Revision_4; giac061_Response_to_Reviewer_Comments_Original_Submission; giac061_Response_to_Reviewer_Comments_Revision_1; giac061_Response_to_Reviewer_Comments_Revision_2; giac061_Reviewer_1_Report_Original_Submission (Borbala Mifsud -- 3/22/2021 Reviewed); giac061_Reviewer_1_Report_Revision_1 (Borbala Mifsud -- 7/5/2021 Reviewed); giac061_Reviewer_2_Report_Original_Submission (Feng Yue -- 4/4/2021 Reviewed); giac061_Reviewer_2_Report_Revision_1 (Feng Yue -- 7/19/2021 Reviewed); giac061_Reviewer_3_Report_Revision_2 (Aleksandra P\u0119kowska -- 11/23/2021 Reviewed); giac061_Supplemental_Files."} +{"text": "That is, how many changes and options should a sequence script allow before the more efficient choice is to branch out into multiple scripts.\u2022Efforts in Accessible MRI can use the data, acquired using open-source sequences on major commercial scanners, as a reference point for experiments using the same sequences on new hardware.1See 1.11.1.1In the main folder (developer_main_site/IRSE), the sequence implementation is presented in a Jupyter notebook (write_irse_interleaved_split_grad.ipynb). In addition, a developer PDF form (IRSE_DEV_QUALITATIVE.pdf) includes information on the test experiment including sequence parameters, hardware setup, example image, image quality measures, and safety metrics.The simulation information (developer_main_site/IRSE/sim) provides phantom models where the map dimensions are the \u201cx\u201d and \u201cy\u201d spatial indices and simulated k-space signals with dimensions \u201ckx\u201d and \u201cky\u201d. Reconstructed images are included as figures .The acquisition folder (developer_main_site/IRSE/acq) includes the tested sequence file (irse_pypulseq_colab_256_TI150_neg4.4.seq), the randomized slice order (irse_sl_order.mat), and raw data from two repetitions from the same sequence where the data dimensions are \u201csamples\u201d, \u201cchannels\u201d, and \u201creadouts\u201d, in that order. In addition, the acquisition details are listed in a sheet (irse_acq_info.xlsx).In the reconstruction folder (developer_main_site/IRSE/recon), we include the reconstruction script (reconstruct_images.m) as well as the reconstructed images (images_combined.mat) and montage figure (irse_pulseq_montage.png). Image quality metrics (irse_metrics.mat) and ACR test results (ACR_METRICS_IRSE.xlsx) are also included.1.1.2Similar to 1.1.1, we present in the main folder (developer_main_site/TSE) the sequence implementation (write_tse.ipynb) and the developer PDF form (TSE_DEV_QUALITATIVE.pdf). 
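The released reconstruction scripts are MATLAB (reconstruct_images.m and its TSE counterpart). As a rough Python illustration of the basic steps implied by the documented IRSE raw layout of (samples, channels, readouts), and assuming the TSE raw data follows the same layout, one might perform a 2D inverse FFT per channel followed by root-sum-of-squares coil combination. This is a sketch under those assumptions, not the released pipeline:

```python
# Rough illustration only; the released reconstruction code is MATLAB.
# Assumes one fully sampled Cartesian slice stored as (samples, channels, readouts).
import numpy as np

def rss_recon(raw):
    # Reorder to (kx, ky, channel): readouts are taken as the phase-encode axis.
    kspace = np.transpose(raw, (0, 2, 1))
    img = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(0, 1)), axes=(0, 1)),
        axes=(0, 1),
    )
    # Root-sum-of-squares combination across receive channels.
    return np.sqrt((np.abs(img) ** 2).sum(axis=-1))
```

Real multi-slice data would additionally be re-sorted using the randomized slice order (irse_sl_order.mat) and, for the TSE, the multi-echo phase encoding order (tse_pe_info.mat).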
The simulation folder (developer_main_site/TSE/sim) includes equivalent phantoms , simulated images and figures . In addition to the sequence file (tse_ms_TR3000ms_TE50ms_4echoes.seq), the raw data , and the acquisition details (tse_acq_info.xlsx), the acquisition folder (developer_main_site/TSE/acq) also includes the multi-echo phase encoding order (tse_pe_info.mat) with dimensions \u201cnumber of echoes\u201d and \u201cnumber of excitations\u201d. Lastly, the reconstruction folder (developer_main_site/TSE/recon) covers the equivalent files as in 1.1.1 .1.1.31 map and the acquisition details (irse_t1_acq_info.xlsx). Raw data is provided as ten separate files at different Tis .The sequence implementation (write_irse_interleaved_split_grad.ipynb) and documentation PDF (IRSE_DEV_QUANTITATIVE.pdf) are provided like above. Acquisition data (developer_main_site/IRSE_T1_mapping/acq) includes ten sequence files with different inversion times see used to The reconstruction folder (developer_main_site/IRSE_T1_mapping/recon) includes a reconstruction script (reconstruct_T1_mapping_images.m), the original and low-pass filtered reconstructed images , and the montage of all ten filtered images (T1_mapping_images_filtered_montage.png).1 mapping MATLAB code: the function for creating a regression curve following the T1 signal model (createFitT1.m) and the script that takes in filtered images and generates a T1 map (T1Mapping.m). The output map is given in units of seconds in two versions, generated from either the original or the filtered T1 images . The filtered image is shown in a figure after adjusting the display to eliminate the phantom background (T1_map_cleaned.png). Region-Of-Interest (ROI) values for the individual T1 spheres are given in a separate file (t1_by_sphere_filtered.mat).The analysis data (developer_main_site/IRSE_T1_mapping/ana) includes the T1.1.42 mapping (tse_multiecho_NIST_t2_sag.seq), the acquisition details (tse_t2_acq_info.xlsx), and the raw data (T2_mapping_raw_data.mat). In the reconstruction folder, equivalent script, data, and figure are provided of the 23 T2 mapping images, each reconstructed from samples at the same echo number and therefore the same TE . The analysis folder contains analogous mapping scripts , two T2 maps generated from original and filtered data where the latter uses only the 5th to 23rd echoes to best capture the most T2 values imaged, the cleaned T2 map figure (T2_map_cleaned.png), and the ROI statistics (T2_by_sphere_filtered.mat).Similar documentation (TSE_DEV_QUANTITATIVE.pdf) and sequence notebook (write_tse_t2_mapping.ipynb) are included in the main folder (developer_main_site/TSE_T2_mapping). Acquisition data includes the single multi-echo TSE sequence with 23 echoes for variable TE T1.21.2.1The \u201cIRSE_ACR\u201d folder contains the raw data (raw_data_irse_second_site.mat), the image montage (IRSE_images_second_site.png), and the filled user form documenting the steps performed and user feedback (IRSE_USER_QUALITATIVE.pdf).1.2.2The \u201cTSE_ACR\u201d folder contains the equivalent raw data (raw_data_tse_second_site.mat), image figure (TSE_images_second_site.png), and user form (TSE_USER_QUALITATIVE.pdf). It also provides the phase encoding order (pe_info.mat).1.2.31 map as well as the user form (IRSE_USER_QUANTITATIVE.pdf) are included.Eight raw data files are provided for the different TIs . 
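Given magnitude images reconstructed at each TI, a per-voxel T1 estimate follows from fitting an inversion-recovery signal model. The released fitting code is MATLAB (createFitT1.m and T1Mapping.m); the Python sketch below assumes the standard magnitude inversion-recovery model |S(TI)| = |A * (1 - 2 * exp(-TI/T1))|, which is an assumption about the model form, with illustrative TI values:

```python
# Sketch of a per-voxel T1 fit using the standard magnitude IR model; the
# released createFitT1.m defines the authoritative model and settings.
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, a, t1):
    return np.abs(a * (1.0 - 2.0 * np.exp(-ti / t1)))

def fit_t1(ti_s, signal):
    """ti_s: inversion times in seconds; signal: magnitude values at one voxel."""
    p0 = (signal.max(), 1.0)  # crude initial guess: amplitude, T1 = 1 s
    (a, t1), _ = curve_fit(ir_signal, ti_s, signal, p0=p0, maxfev=5000)
    return t1

# Example call with illustrative TIs (seconds), not the acquired values:
# t1 = fit_t1(np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 4.8]), voxel_signal)
```

Applying this fit voxel-wise, then masking the phantom background, yields a map analogous to T1_map_filtered; sphere-wise ROI means and standard deviations can then be tabulated as in t1_by_sphere_filtered.mat.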
The reconstructed images and T1.2.4Similarly, the raw data (TSE_T2mapping_t2sph_second_site.mat) and reconstructed images (T2_mapping_images_second_site.mat) are included, in addition to the map figure (T2map_second_site.png) and the user form (TSE_USER_QUANTITATIVE.pdf).1.3The folder (documentation_templates) contains the empty developer and user forms for documenting test experiments in the proposed sequence validation framework 22.1Two classic MRI sequences were implemented in the multi-vendor, open-source Pulseq format 2.21/T2 relaxation times, that make up a numerical phantom. When exposed to temporally and spatially varying magnetic fields such as those defined by a pulse sequence program, the isochromat's magnetization vector evolves according to its initial condition, innate parameters, and the external driving fields. A numerical library is used to solve the differential equations. In the end, the detectable signals from the transverse magnetization are added up across all isochromats in a phantom to generate the raw MRI signal.Lower resolution numerical simulation based on the Bloch equations All simulations were performed on a Windows 10 operating system with an Intel(R) Core i7\u20138650\u00a0U CPU. Specific parameters used for the simulation were: FOV\u00a0=\u00a0250\u00a0mm, slice thickness\u00a0=\u00a05\u00a0mm; TR\u00a0=\u00a04500\u00a0ms, TI\u00a0=\u00a0200\u00a0ms, and TE\u00a0=\u00a010\u00a0ms for IRSE; TR\u00a0=\u00a04500\u00a0ms and TE\u00a0=\u00a010\u00a0ms for TSE.2.31/T2 spheres.Experiment parameters are shown in 2.41: 2: 1/T2 maps, small sphere-wise ROIs were manually selected to ensure only interior voxels are used. For each ROI, the mean and standard deviation of T1/T2 values were computed.For each mapping experiment, the corresponding signal equation and Structural Similarity Index Measure (SSIM) 2.5The open-source sar4seq library was used to generate predicted time-averaged RF power and Specific Absorption Rate (SAR) for each sequence ,14. At aOur work did not involve human or other animal subjects and adheres to ethics in publishing standards.Gehua Tong: Formal analysis, Investigation, Methodology, Software, Visualization, Writing \u2013 original draft, Writing \u2013 review & editing. Andreia S. Gaspar: Investigation, Validation, Visualization. Enlin Qian: Methodology, Software. Keerthi Sravan Ravi: Software. John Thomas Vaughan: Writing \u2013 review & editing. Rita G. Nunes: Funding acquisition, Project administration, Resources, Supervision, Validation, Visualization. Sairam Geethanath: Conceptualization, Funding acquisition, Methodology, Project administration, Resources, Software, Supervision, Writing \u2013 review & editing.The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."} +{"text": "This article provides a pooled cross-sectional sample of Chilean households from 4 survey waves . The data has information on the demographics of the household, labor participation and occupation, savings rates, plus wealth of different sources. The data is available in both Excel and Stata formats. It is an important data for the study of savings, wages, pensions and wealth inequality. The dataset consists of demographics , labor market information , savings rates, and expected wealth components . The dataset includes 33,538 households from the 1997, 2007, 2012 and 2017 waves of the Chilean Family Expenditures Survey. 
The variables include Household identifier variables and population weights, Demographic variables , Work and income variables, Savings rates and consumption flows variables, Ratios of household wealth as a fraction of permanent household income, Betas for the linear correlation between unemployment risk and income volatility of the different 538 worker types with the aggregate consumption kernel pricing returns and the pension fund returns.Household identifier variables and population weights \u2013hogar \u201chousehold identifier of each EPF wave\u201dfolio_hogar \u201chousehold identifier for the pooled cross-section of all the EPF waves\u201dyear \u201cYear of the EPF Survey wave\u201dfactor_all \u201cexpansion factor (population weight) of the household in the survey\u201did \u201cgroup cluster identifier\u201dDemographic variables \u2013sexo \u201cGender of the household head \u201dedad \u201cage (in years) of the household head\u201deduc \u201ceducation: elementary, secondary, university\u201deduc_ecf \u201cEducation level of the respondent (only 2017 wave)\u201d, with values 1 \u201cElementary education\u201d 2 \u201cSecondary education\u201d 3 \u201cTechnical or Some college\u201d 4 \u201cCollege education\u201d 5 \u201cPost-graduate education\u201docup_female_spouse \u201cfemale partner of the household is employed\u201dcouple_d \u201chousehold has a couple among its members\u201dd_child \u201cdummy for whether the household has a child\u201dnum_sen \u201cdummy for whether the household has a senior citizen (above age 65) among its members\u201dWork and income variables \u2013ILFP \u201cdummy for whether the main income of the household comes from informal employment\u201ddummy_region \u201cdummy for whether the household lives in regions outside of the Metropolitan Capital region\u201dquintile_h \u201chousehold national income quintile\u201dytoth \u201clog of the total household permanent income (monthly)\u201dsd_ln_inc_sect \u201cannual standard deviation of the household labor income\u201dunemp_sect \u201cunemployment risk of the household\u201dSavings rates and consumption flows variables \u2013CBeta \u201cfraction of wealth that should be consumed each year in a standard life cycle model\u201dSRate \u201cratio of the current saving rate in terms of the permanent income\u201dSRatePI \u201cratio of the permanent saving rate in terms of the permanent income\u201daggSRate \u201cratio of the total current saving rate in terms of the permanent income\u201daggSRatePI \u201cratio of the total permanent saving rate in terms of the permanent income\u201dRatios of household wealth as a fraction of permanent household income \u2013Rytoth_c \u201cHousehold income surprise\u201dR_TotalWI_hh \u201cDiscounted total wealth\u201dR_PW2I_hh \u201cDiscounted total pension wealth\u201dR_FE_hh \u201cDiscounted labor earnings wealth\u201dR_PW2I_hh_NoSy \u201cDiscounted contributory pension wealth\u201dR_PW2I_APS \u201cDiscounted solidarity pension wealth\u201dR_PWI_hh_past \u201cDiscounted current contributory pension wealth\u201dR_PWI_hh_NoSy \u201cDiscounted contributory pension wealth\u201dR_FENL_hh \u201cDiscounted non labor earnings wealth\u201dR_FErent \u201cDiscounted rent wealth\u201dR_FEtransfers \u201cDiscounted transfers wealth\u201dR_FEfinassets \u201cDiscounted financial income wealth\u201dBetas for the linear correlation between unemployment risk and income volatility of the different 538 worker types with the aggregate consumption kernel pricing returns and the pension fund returns 
–BetaPF_unemployed "Beta between the occupational unemployment with the Pension Fund real rate of return"
BetaPF_sd_ln_ing_tot_ocup3 "Beta between the occupational income volatility with the Pension Fund real rate of return"
Beta_unemployed "Beta between the occupational unemployment with the Consumption Pricing Kernel real rate of return"
Beta_sd_ln_ing_tot_ocup3 "Beta between the occupational income volatility with the Consumption Pricing Kernel real rate of return"
The above is the list of variables available in the dataset.
2
The data consists of demographics, labor earnings and risk, and a simulation of the future contributory pension wealth plus public solidarity benefits for a sample of Chilean households (Madeira). The model is calibrated with the Chilean Family Expenditures Survey (Encuesta de Presupuestos Familiares, henceforth EPF) waves between 1997 and 2017. The dynamics of labor force participation, formal versus informal work, and unemployment are calibrated from the Chilean Employment Survey, according to 538 worker types which are obtained from the multivariate vector of the workers' sex, age, education, industry, and region. Researchers can download the raw data of all the EPF and ENE surveys from the website of the Chilean Institute of National Statistics:
ENE: https://www.ine.cl/estadisticas/sociales/mercado-laboral/ocupacion-y-desocupacion.
EPF: https://www.ine.cl/estadisticas/sociales/ingresos-y-gastos/encuesta-de-presupuestos-familiares.
The applied model that was calibrated from the raw data is explained in detail in the online file "Methodology.pdf". The codes used to create the variables are explained in detail in the file README_JIMF_Codes_Summary.docx, and CODES_JIMF.zip includes all the 45 Stata software codes used in the article. These files are publicly available with the data in the repository Mendeley Data.
The online file in Mendeley Data CODES_JIMF.zip includes all the software codes with detailed comments on the methods used inside each code. Here I provide a brief summary of those codes. The "M_EPF_analysis.do" do-file replicates the analysis of the article by calling all the algorithms and running each code in sequenced steps until all the data formatting and analysis is completed.
The codes pctile_wgts.do, mean_wgts.do, and linear_reg_impute3.do create conditional group percentiles, mean values, and imputations for missing values in the micro survey data.
A second set of codes formats the Income and Employment Survey, creating unemployment risk and income volatility statistics for 538 worker types for the period 1990 until 2017, with worker types given by gender, education, region, industry of occupation, age, and income quintile. These codes include: esi_format.do, panel_esi_allyrs_FLP.do (formats rotating samples between 2 years for the ESI workers in the labor force), panel_esi_ILFP.do, panel_esi_income_growth0.do, layoff_jobfind0.do, income_shock0.do, p_income.do, and Consumption_WageVolatility.do.
A third set of codes formats the EPF waves: EPF_2017.do, EPF_2017_DurSDurNDur_Tot.do, format_epf_1997.do, format_epf_2007.do, format_epf_2012.do, and format_epf_2017.do (the latter formats the EPF 2017 data with the same variables and formats as the other years). The code format_epf_all.do joins all the EPF waves.
A fourth set of codes joins the EPF data with the Employment and Income Survey worker type statistics for each of the 538 worker types across survey waves. The codes then estimate past and expected future pension contributions for each worker.
This set of codes includes: EPF_labor_risk_vintage.do, EPF_all_LFP_ILFP_FE_PW_PWpast.do, income_potential.do, import_FE_PW_PW_past.do, generate_FE_FENL_PW_PWpast_TW_hh.do, generate_log_Wealth.do, Pension_tope_income.do.
A fifth set of codes calibrates the pension system parameters for Chile in previous years, the pension withdrawals, the current policy reforms in 2022, and the counterfactual scenarios for the future reforms. This set of codes includes: Ingreso_bruto.do, Pension_income.do, Pension_PBS.do, Pension_PGU.do, Pension_PBS_2019.do, Pension_PBS_2008.do, Pension_PBS_PASIS.do, Pension_PASIS.do, Pension_Contr_APS_total.do, Pension_Reparto.do, Pension_Future.do, Retiro_AFP.do, PensionReformsFormat.do, predictLS_old_new1.do.
The sixth set of codes analyses the data and provides the results in Madeira.
All the methods (in Stata do-files), theoretical methodology, and the datasets are published online in the Mendeley Data repository (Madeira): https://data.mendeley.com/datasets/dyp8yr2sr2/1.
Carlos Madeira: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration, Funding acquisition.
The author declares that he has no known competing financial interests or personal relationships which have or could be perceived to have influenced the work reported in this article. I received no funding from any institution besides my employer, which is the Central Bank of Chile. Furthermore, there are no patents or impediments to publication, including the timing of publication, with respect to the intellectual property of the article or the associated dataset."}
{"text": "Reproducibility of liquid chromatography separation is limited by retention time drift. As a result, measured signals lack correspondence over replicates of the liquid chromatography–mass spectrometry (LC-MS) experiments. Correction of these errors is named retention time alignment and needs to be performed before further quantitative analysis. Despite the availability of numerous alignment algorithms, their accuracy is limited. We present Alignstein, an algorithm for LC-MS retention time alignment. It correctly finds correspondence even for swapped signals. To achieve this, we implemented a generalization of the Wasserstein distance to compare multidimensional features without any reduction of the information or dimension of the analyzed data. Moreover, Alignstein by design requires neither a reference sample nor prior signal identification. We validate the algorithm on publicly available benchmark datasets, obtaining competitive results. Finally, we show that it can detect the information contained in the tandem mass spectrum by the spatial properties of chromatograms. We show that the use of optimal transport effectively overcomes the limitations of existing algorithms for statistical analysis of mass spectrometry datasets. The algorithm's source code is available at https://github.com/grzsko/Alignstein.
Advances in liquid chromatography–mass spectrometry (LC-MS) have provided a remarkable insight into the functioning of organisms, ranging from the protein level through further molecular layers. One of the remaining computational challenges is the correction of errors caused by retention time (RT) drift. It limits the reproducibility of LC separation, which is important for experiments usually acquired in many (even hundreds) replicates.
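As a toy illustration of the optimal-transport idea underlying this comparison (a sketch only; Alignstein itself uses a generalized, multidimensional Wasserstein distance on whole features), for two equally sized sets of retention times with uniform weights, the 1-D Wasserstein distance reduces to the mean absolute difference of the sorted values:

# Toy 1-D Wasserstein (earth mover's) distance between two runs' retention times.
w1_distance <- function(rt_a, rt_b) {
  stopifnot(length(rt_a) == length(rt_b))  # equal sizes, uniform weights assumed
  mean(abs(sort(rt_a) - sort(rt_b)))       # optimal transport pairs sorted values
}

set.seed(1)
run1 <- sort(runif(100, 0, 3600))              # feature RTs (seconds) in run 1
run2 <- run1 + rnorm(100, mean = 15, sd = 5)   # same features, 15 s drift + noise
w1_distance(run1, run2)                        # ~15, the average drift to correct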
RT drift became a significant obstacle with the emergence of high-performance chromatography (HPLC) and ultra-performance chromatography (UPLC) technologies. For example, nanoflow UPLC column separation takes a relatively long time, usually up to several hours. For these experiments, the elution time of peptides may vary up to 5 minutes or even more. RT drift can be corrected by the experimental protocol only to a limited extent. It therefore requires a computational correction, usually named the RT alignment, which results in the correspondence of signals across runs and enables further quantitative analysis.
Here, we present a novel alignment algorithm named Alignstein (cf. Fig.). It finds correct correspondence of signals across runs, even for swapped signals. This article is organized as follows. First, we characterize Alignstein and analyze how it deals with the swapped signals. Then, we validate the algorithm on publicly available benchmark datasets. Finally, we show the applicability of our approach to detecting corresponding biomarkers in differing samples.
RT drift may swap the order of eluting analytes. In the proteomic experiment (cf. Methods), we found that about 3% of all feature pairs are swapped between two chromatograms. Although many of the available algorithms properly align most signals, they still fail to resolve swaps.
Most approaches to RT alignment are so-called warping algorithms, for example, OpenMS or MetAlign. Such approaches typically reduce each feature to its m/z and average RT value, ignoring the information of the isotopic envelope or the feature span over the RT dimension. Without feature spatial characteristics and information on coeluting ions, elution order swaps are practically undetectable.
…, including replicates of a sample with 0 µg/L BaP. For better readability, outliers over 200 seconds are omitted. Most RT differences are not greater than 10 seconds.
Supplementary Fig. S3. Flow network for finding the optimal feature matching between n features of one chromatogram, denoted by nodes L1, …, Ln, and m features from the other chromatogram, denoted by nodes R1, …, Rm. Nonzero costs are described by edge labels. The cost between feature Li and feature Rj is equal to the GWD between them. An additional node Tr ("trash") gives the possibility of not matching a feature, with cost c. Every edge has capacity equal to 1, except the edge between S (source) and Tr and the edge between Tr and T (sink), with capacities equal to max{0, s − n} and max{0, n − s}, respectively. As a result, we take all matchings.
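The matching described in this caption can be emulated with a linear assignment solver on a padded cost matrix (a sketch under stated assumptions: lpSolve is not a package the paper uses, and the dummy-row/column padding stands in for the explicit flow network with its trash node):

# Sketch: min-cost feature matching with an "unmatched" option of cost c_trash.
library(lpSolve)

match_features <- function(cost, c_trash) {
  n <- nrow(cost); m <- ncol(cost)
  # Pad to a square (n+m) x (n+m) matrix; assigning a real feature to a dummy
  # row/column costs c_trash, i.e., the feature stays unmatched ("trash" node).
  big <- matrix(c_trash, n + m, n + m)
  big[1:n, 1:m] <- cost
  big[(n + 1):(n + m), (m + 1):(m + n)] <- 0   # dummy-dummy pairings are free
  sol <- lp.assign(big)$solution
  which(sol[1:n, 1:m, drop = FALSE] > 0.5, arr.ind = TRUE)  # matched (i, j) pairs
}

set.seed(2)
cost <- matrix(runif(12, 0, 10), nrow = 3)  # toy GWD-like costs, 3 x 4 features
match_features(cost, c_trash = 4)  # pairs costlier than ~2*c_trash go unmatched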
Supplementary Table S1. Detailed results for P1 set in CAAP comparison. P stands for alignment precision, R stands for alignment recall, and F stands for F-score.
Supplementary Table S2. Detailed results for P2 set in CAAP comparison. P stands for alignment precision, R stands for alignment recall, and F stands for F-score.
Abbreviations: BaP: benzo[a]pyrene; C60: fullerene; CAAP: Critical Assessment of Alignment Procedures; CID: collision-induced dissociation; DDA: data-dependent acquisition; GWD: generalized Wasserstein distance; HPLC: high-performance liquid chromatography; IR: identification recall; LC-MS: liquid chromatography–mass spectrometry; m/z: mass-to-charge ratio; MS/MS: tandem mass spectrometry; RT: retention time; UPLC: ultra-performance liquid chromatography.
The authors declare that they have no competing interests.
G.S. was supported by Polish National Science Center grant number 2019/33/N/ST6/02949. A.G. and B.M. were supported by Polish National Science Center grant number 2018/29/B/ST6/00681.
G.S. implemented and verified the algorithm. A.G. conceived the idea of the project and discussed the results. B.M. designed the algorithm and supervised the work. G.S., A.G., and B.M. cowrote the manuscript."}
{"text": "Germline genetic variants modulate human immune response. We present analytical pipelines for assessing the contribution of hosts' genetic background to the immune landscape of solid tumors using harmonized data from more than 9,000 patients in The Cancer Genome Atlas (TCGA). These include protocols for heritability, genome-wide association studies (GWAS), colocalization, and rare variant analyses. These workflows are developed around the structure of TCGA but can be adapted to explore other repositories or in the context of cancer immunotherapy. For complete details on the use and execution of this protocol, please refer to the original publication.
• Pipelines for assessing the contribution of germline genetics on tumor immune contexture
• Workflow for data download, processing, assembly, curation, and annotation
• Protocols for heritability, GWAS, colocalization, and rare variant analysis
• Visualization tools for exploration of the results by iAtlas and PheWeb
Publisher's note: Undertaking any experimental protocol requires adherence to local institutional guidelines for laboratory safety and ethics.
These protocols describe specific bioinformatic workflows for the analyses of The Cancer Genome Atlas (TCGA) genomic datasets and well-characterized immune traits.
However, these workflows can be adapted to other datasets with similar structures.
Authorization from the database of Genotypes and Phenotypes (dbGaP) is necessary to access TCGA germline genetic data (whole exome sequencing (WES) and single nucleotide polymorphism (SNP) array data, including derived imputed SNP data) and Genotype-Tissue Expression (GTEx) genotype data. In addition, download of GTEx summary statistics from the GTEx Google Cloud bucket is a requester-paid download service.
Timing: 1–3 weeks (for step 1)
1. To apply for dbGaP, an institutional account is required.
2. Apply for dbGaP authorization to access TCGA and GTEx controlled access data: https://dbgap.ncbi.nlm.nih.gov/aa/wga.cgi?page=login.
3. Prepare a data access request: https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/GetPdf.cgi?document_name=GeneralAAInstructions.pdf.
CRITICAL: Preparing the application is fast; however, the review process of the application can take a few weeks and should be considered ahead of time.
4. PLINK installation. a. Install PLINK. b. Download and software documentation is available at: https://www.cog-genomics.org/plink2.
5. bcftools installation. a. Install bcftools (1.9 or current version). b. Download and software documentation is available at: https://samtools.github.io/bcftools/.
6. Genome-wide Complex Trait Analysis (GCTA) software package installation. a. Install GCTA (1.91.2beta or current version). b. Download and software documentation is available at: https://cnsgenomics.com/software/gcta.
7. R/Bioconductor and related packages installation. a. Install R (3.5.0 or current version). Download and software documentation is available at: https://www.r-project.org/. b. Install Bioconductor (3.7 or current version). Installation instructions and documentation is available at: https://www.bioconductor.org/. c. Install the R package: snplist (0.18.1 or current version). Installation instructions and documentation is available at: https://cran.r-project.org/web/packages/snplist/index.html. d. Install the Bioconductor package: SNPlocs.Hsapiens.dbSNP144.GRCh37 (0.99.20 or current version). Installation instructions and documentation is available at: https://bioconductor.org/packages/release/data/annotation/html/SNPlocs.Hsapiens.dbSNP144.GRCh37.html. e. Install the Bioconductor package: biomaRt (2.36.1 or current version). Use host grch37.ensembl.org. Installation instructions and documentation is available at: https://bioconductor.org/packages/release/bioc/html/biomaRt.html. f. Install the Bioconductor package: GenomicRanges (1.32.7 or current version). Installation instructions and documentation is available at: https://bioconductor.org/packages/release/bioc/html/GenomicRanges.html. g. Install the Bioconductor package: rtracklayer (1.40.6 or current version). Installation instructions and documentation is available at: https://bioconductor.org/packages/release/bioc/html/rtracklayer.html. h. Install the Bioconductor package: AnnotationHub (2.12.1 or current version). Installation instructions and documentation is available at: https://bioconductor.org/packages/release/bioc/html/AnnotationHub.html. i. Install the Bioconductor package: EnsDb.Hsapiens.v86 (2.99.0 or current version). Installation instructions and documentation is available at: https://bioconductor.org/packages/release/data/annotation/html/EnsDb.Hsapiens.v86.html.
8. LocusZoom installation. a. Install LocusZoom (Genome Build/LD Population: hg19/1000 Genomes Nov 2014 EUR) from http://locuszoom.org/.
9. eCAVIAR installation. a. Install eCAVIAR from http://zarlab.cs.ucla.edu/tag/ecaviar/.
Note: Indicated software and package versions are the ones that were used in the original analysis.
bcftools: Used for manipulating variant call data in the Variant Call Format (VCF) and its binary counterpart BCF.
GCTA: Used for heritability analysis. GCTA is one of the first and well-established software packages for estimation of the proportion of phenotypic variance explained by all genome-wide SNPs for a complex trait.
eCAVIAR: Used for colocalization analysis. eCAVIAR is a commonly used method for colocalization analyses and, as compared with other methods, has the advantage of modeling the LD; it allows us to integrate various sources of Linkage Disequilibrium (LD).
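For convenience, the R/Bioconductor installations listed in step 7 can be consolidated into a single setup script (a sketch; package names and the grch37 biomaRt host are taken from the list above, and current versions are installed rather than the pinned ones):

# Consolidated installation of the R/Bioconductor packages from step 7.
install.packages("snplist")                     # CRAN: SNP-to-gene set utilities

if (!requireNamespace("BiocManager", quietly = TRUE))
  install.packages("BiocManager")

BiocManager::install(c(
  "SNPlocs.Hsapiens.dbSNP144.GRCh37",  # rsID lookup for GRCh37 positions
  "biomaRt",                           # gene annotation; use host grch37.ensembl.org
  "GenomicRanges",
  "rtracklayer",
  "AnnotationHub",
  "EnsDb.Hsapiens.v86"
))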
The TCGA quality-controlled genotyping data was imputed to the Haplotype Reference Consortium (HRC) panel. These data can be downloaded as follows.
b. Download the open-access files from the "TCGA QC HRC Imputed Genotyping Data used by the AIM AWG" section:
i. Information on composition of genotyping files: "READ_ME.txt".
ii. File mapping of TCGA Patient ID to corresponding Birdseed genotyping files: "Map_TCGAPatientID_BirdseedFileID.txt".
iii. QC Unimputed Genotyping Data: "READ_ME_1.txt".
iv. HRC Imputed Genotyping Data: "READ_ME_4.txt".
c. Download the controlled access data:
i. To download the controlled access data, follow instructions under the "Instructions for Data Download" for "Controlled Access Data". The necessary manifest files are found under the "Data in the GDC" section for "Controlled Access Data".
ii. Download the following "QC Unimputed Genotyping Data" files: "QC_Unimputed_plink.zip".
iii. Download the following "HRC Imputed Genotyping Data" files:
"HRC imputed genotyping data for chromosome 1 - chr_1.zip".
"HRC imputed genotyping data for chromosome 2 - chr_2.zip".
"HRC imputed genotyping data for chromosome 3 - chr_3.zip".
"HRC imputed genotyping data for chromosome 4 - chr_4.zip".
"HRC imputed genotyping data for chromosome 5 - chr_5.zip".
"HRC imputed genotyping data for chromosome 6 - chr_6.zip".
"HRC imputed genotyping data for chromosome 7 - chr_7.zip".
"HRC imputed genotyping data for chromosome 8 - chr_8.zip".
"HRC imputed genotyping data for chromosome 9 - chr_9.zip".
"HRC imputed genotyping data for chromosome 10 - chr_10.zip".
"HRC imputed genotyping data for chromosome 11 - chr_11.zip".
"HRC imputed genotyping data for chromosome 12 - chr_12.zip".
"HRC imputed genotyping data for chromosome 13 - chr_13.zip".
"HRC imputed genotyping data for chromosome 14 - chr_14.zip".
"HRC imputed genotyping data for chromosome 15 - chr_15.zip".
"HRC imputed genotyping data for chromosome 16 - chr_16.zip".
"HRC imputed genotyping data for chromosome 17 - chr_17.zip".
"HRC imputed genotyping data for chromosome 18 - chr_18.zip".
"HRC imputed genotyping data for chromosome 19 - chr_19.zip".
"HRC imputed genotyping data for chromosome 20 - chr_20.zip".
"HRC imputed genotyping data for chromosome 21 - chr_21.zip".
Note: Please cite the source publications when using these data.
b. Upload "vep_input.txt" to the Ensembl VEP web interface and download the annotation results in the format which is more suitable for Excel.
c. Combine annotations per SNP. VEP provides multiple annotations per SNP. SNPs might map to different genes and could have several biological impacts on nearby genes; one way to collapse such annotations is sketched below.
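A minimal sketch of this collapsing step (illustrative only; the repository's CombineVEPannotations.R is the authoritative implementation, and the column names Uploaded_variation, SYMBOL, Consequence, and IMPACT are assumed from standard VEP output):

# Sketch: collapse the multiple VEP annotation rows per SNP into a single
# pipe-delimited string, matching the example given in sub-step d below.
vep <- read.delim("vep_output.txt", stringsAsFactors = FALSE)  # assumed export

ann <- paste(vep$SYMBOL, vep$Consequence, vep$IMPACT, sep = ":")
collapsed <- tapply(ann, vep$Uploaded_variation,
                    function(x) paste(unique(x), collapse = "|"))
head(collapsed)  # e.g., "CXXC5:synonymous_variant:LOW|CXXC5:upstream_gene_variant:MODIFIER"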
Use R script \u201cCombineVEPannotations.R\u201d to combine annotations.d.Example of combined annotations: \u201cCXXC5:synonymous_variant:LOW|CXXC5:upstream_gene_variant:MODIFIER\u201d.Download the Ensembl Variant Effect Predictor\u00a0(VEP) annotation.16.a.https://egg2.wustl.edu/roadmap/web_portal/chr_state_learning.html#exp_18state.Access the Expanded 18-state model for Build GRCh37/hg19: b.Under \u201cMNEMONICS BED FILES\u201d section, download the archive of all mnemonics bed files: \u201call.mnemonics.bedFiles.tgz\u201d.Download Roadmap Epigenomics Project Epigenomic State Model.Timing: 5\u00a0minThe scripts and code descriptions used in the entirety of this protocol are available at:https://github.com/rwsayaman/TCGA_PanCancer_Immune_Genetics.17.a.Review all \u201cREADME\u201d files for each section of workflow.b.Ensure all the necessary code has been downloaded.Download or clone the GitHub repository: \u201cTCGA_PanCancer_Immune_Genetics\u201d .a.ReviewOptional: This protocol is designed to work with pre-processed and quality-controlled genotyping data. If users start from raw genotyping data from SNP arrays, please see the companion protocol for quality-control analysis of germline data, stranding and genotype imputation from the University of California, San Francisco (UCSF) Wynton high-performance (HPC) cluster, which currently contains 449 nodes with over 12572 Intel CPU cores and 42 nodes containing a total of 148 NVIDIA GPUs, (2) the original UCSF TIPCC HPC cluster (now C4), which had 8 communal compute nodes and 1 dedicated node, each with 12\u201364 cores , and (3) two additional severs with 32 and 48 CPUs (Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00 GHz), respectively. In the optimization phase, analyses have been performed by different operators and at multiple times to ensure accuracies and reproducibility. As these are shared servers, time might considerably vary in function of the number of nodes available. From a computational point of view, the most time-consuming step is the GWAS . However, file preparation, as well as annotation and curation of the output files is particularly time consuming. An estimated time, considering these manual steps and factoring in some troubleshooting time .20.2\u00a0<\u00a00.5. The imputation R2 is the estimated value of the squared correlation between imputed genotypes and true, unobserved genotypes.a.2 \u2265 0.5.Filter each \u201cchr\u2217.dose.vcf.gz\u201d files and generate \u201cchr\u2217.rsq0.5.dose.vcf.gz\u201d imputed genotype files of SNPs with Rb.Index and generate the corresponding \u201cchr\u2217.rsq0.5.dose.vcf.gz.tbi\u201d files.c.Generate the corresponding filtered \u201cchr\u2217.info.rsq0.5.gz\u201d information files.For each chromosome, filter the HRC imputed genotyping data to exclude SNPs with imputation R21.a.Convert VCF \u201cchr\u2217.rsq0.5.dose.vcf.gz\u201d files to PLINK \u201ctcga_imputed_hrc1.1_rsq0.5_chr\u2217.bed\u201d files.b.Optional: Rename SNP ID names in the HRC imputed dataset with alleles listed in alphabetical order to assist matching with other datasets.Note: This only affects the SNP ID name and not the encoding of the A1 and A2 alleles in PLINK which is maintained.c.Optional: In PLINK, exclude SNPs with MAF\u00a0<\u00a00.005 (--maf).Note: Minor allele frequency (MAF) filtering can be performed at this step to reduce the HRC imputed genotyping data input file size. 
Alternatively, GWAS analysis can be performed using\u00a0the R2 filtered-HRC imputed genotyping data from \u201cNote: Scripts and code description used in this section are available at: https://github.com/rwsayaman/TCGA_PanCancer_Immune_Genetics.For each chromosome, filter HRC imputed genotyping data to exclude SNPs with minor allele frequency (MAF)\u00a0<\u00a00.005.This section describes the pre-processing steps necessary to use the HRC Imputed Genotyping Data we generated as a general resource for this specific analysis .19.UnzipPre-processing of Imputed Data. The example code in this section were optimized for the high-performance compute environment at UCSF HPC employing Portable Batch System (PBS) job scheduling; consult your system administrator to adapt the provided code to your system.Direct link: Note: \u201cTable\u00a0S1\u201d and \u201cTable\u00a0S2\u201d in the \u201cThe overall disk space needed for the project is 2.67 TB. Breakdown for different data types and analyses are provided below .Table\u00a01DTiming: 1\u00a0day1.a.Curated set of 139 immune traits in TCGA can be downloaded from \u201cTable\u00a0Sb.i.Code to generate gene expression signatures from Amara e and singii.Necessary transformation of immune trait values in TCGA for use in genetic analysis described below are annotated in \u201cTable\u00a0SFor calculation of immune traits in a new dataset, consult methods Prepare curated set immune traits genes.2.Calculate Pearson correlations of continuous values of the 139 immune traits.3.a.Cluster correlation matrix using complete agglomerative hierarchical clustering method and (1-correlation) as distance metric.Generate a correlation heatmap with a hierarchical clustering dendrogram.4.Identify highly correlated blocks (dendrogram clusters) of immune traits to generate immune functional modules.Note: Scripts and code description used in this section are available at: https://github.com/rwsayaman/TCGA_PanCancer_Immune_Genetics.139 immune traits used in the analyses were curated from by selecImmune Traits.Direct link: Timing: 2\u00a0weekshttps://cnsgenomics.com/software/gcta).5.a.Formatted input file of TCGA immune traits is provided in the GitHub rFormat immune trait input file:Heritability analysis on 139 traits is conducted using a mixed-model approach implemented in the genome-wide complex trait analysis (GCTA) software package with the genomic-relatedness-based restricted maximum-likelihood (GREML) method Yang et. This ca6.a.Ancestry assignments for each TCGA individual are provided in \u201cTable\u00a0S1. 
TCGA Sample List Used in the Analysis" from the source publication.
b. Identify genetic ancestry assignment of each individual and create a filtered sample list for each genetic ancestry cluster. Formatted input files of TCGA patient barcodes assigned to each genetic ancestry cluster are provided in the GitHub repository:
"Immune.pheno.139.source.coded.TCGAID.9769.txt" (the immune trait input file from step 5)
"TCGAID_Cluster1.EUR.8036.txt"
"TCGAID_Cluster2.ASIAN.605.txt"
"TCGAID_Cluster3.AFR.904.txt"
7. To conduct heritability analyses within each ancestry subgroup (NEuropean=7,813, NAfrican=863, NAsian=570, and NAmerican=209 individuals), subset individuals belonging to the specified ancestry group from the QC TCGA HRC imputed genotyping data in PLINK (--keep) using the ancestry assignments.
See script: "qsub_plink_whitelist_geno_mind_unique.indv_chr.auto_hardy.nonriskSNP_maf_uniqueSNP_TCGAID_ancestry.txt".
8. Estimate the genetic relatedness matrix (GRM) from all the autosomal SNPs with MAF > 0.01 within each ancestry group using GCTA:
gcta64 --bfile [input_filename] --autosome --maf 0.01 --make-grm --out [output_filename] --thread-num [numeric_value_number_threads]
See script: "qsub_gcta_whitelist_geno_mind_unique.indv_chr.auto_hardy.nonriskSNP_maf_uniqueSNP_TCGAID_ancestry_grm.txt".
9. Filter out individuals for relatedness. GCTA removes one of a pair of individuals with estimated relatedness larger than the specified cut-off value (cut-off = 0.05):
gcta64 --grm [input_filename] --grm-cutoff 0.05 --make-grm --out [output_filename]
See script: "qsub_gcta_whitelist_geno_mind_unique.indv_chr.auto_hardy.nonriskSNP_maf_uniqueSNP_TCGAID_ancestry_grm_grm.cutoff.0.05.txt".
10. Run GCTA GREML unconstrained to estimate variance explained by SNPs with defined categorical and continuous covariates using the following parameters:
gcta64 --reml-no-constrain --grm [input_filename] --pheno [immune_trait_matrix_filename] --mpheno [immune_trait_numeric_index_input_matrix] --covar [categorical_covariates_filename] --qcovar [continuous_covariates_filename] --thread-num [numeric_value_number_threads] --out [output_filename]
See script: "qsub_grm.cutoff.0.05_greml_EUR.ImmunePheno216_CancerTypeSex.covar_PCA.AgeYears.qcovar.txt".
CRITICAL: Run heritability analysis unconstrained. This will produce heritability estimates (Vg/Vp) and standard deviations outside the 0–1 range.
11. From the GCTA GREML ".hsq" result file, extract the ratio of genetic variance to phenotypic variance (Vg/Vp) estimate and SE, the LRT p-value, and the sample size (n) for each immune trait.
12. Concatenate heritability analysis results across all immune traits tested. a. Annotate each result file with the corresponding immune trait, immune category, and immune module. b. Append annotated result files from each immune trait.
13. Correct for multiple-hypothesis testing per ancestry group by calculating the FDR p-value using the Benjamini-Hochberg adjustment method.
14. Optional: Visualize % heritability (Vg/Vp * 100) across all immune traits per ancestry group for exploratory data analysis.
Note: Scripts used in this section are available at: https://github.com/rwsayaman/TCGA_PanCancer_Immune_Genetics.
Heritability Analysis.
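A sketch of steps 11–13 in R, assuming one .hsq file per immune trait in a directory hsq_dir (a hypothetical layout); the Source/Variance/SE columns and the Pval and n rows follow the documented GCTA GREML output format:

# Sketch: harvest GCTA GREML .hsq outputs and apply Benjamini-Hochberg FDR.
hsq_files <- list.files("hsq_dir", pattern = "\\.hsq$", full.names = TRUE)

harvest <- do.call(rbind, lapply(hsq_files, function(f) {
  x     <- read.delim(f, header = TRUE, fill = TRUE, stringsAsFactors = FALSE)
  vgvp  <- x[x$Source == "V(G)/Vp", ]          # heritability row
  data.frame(
    trait = sub("\\.hsq$", "", basename(f)),
    VgVp  = as.numeric(vgvp$Variance),
    SE    = as.numeric(vgvp$SE),
    pLRT  = as.numeric(x$Variance[x$Source == "Pval"]),
    n     = as.numeric(x$Variance[x$Source == "n"])
  )
}))

# Step 13: FDR correction across traits (run separately per ancestry group)
harvest$FDR <- p.adjust(harvest$pLRT, method = "BH")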
The example code in this section were optimized for the high-performance compute environment at UCSF HPC employing Portable Batch System (PBS) job scheduling; consult your system administrator to adapt the provided code to your system.Direct link: https://www.cri-iatlas.org/), in the \u201cGermline Analysis\u201d module analysis in PLINK in each ancestry cluster (--genome) and filter\u00a0individuals out for relatedness (pihat\u00a0<\u00a00.25). This leaves n=9,603 unrelated individuals in the TCGA cohort in the output file:GWAS were performed on traits that we found to have significant heritability since these would be most likely driven by common genetic variants.16.Recalculate allele frequencies in PLINK for the subset of individuals used in the analysis in PLINK:plink --bed [input_bed_filename]--bim [input_bim_filename]--fam [input_fam_filename]--allow-no-sex--keep-allele-order--keep [GWAS.IBD.ALL.TCGAID.txt]--freq--out [Freq/output_filename]\u201cGWAS.IBD.ALL.TCGAID.txt\u201d.17.Prepare the covariate file:See script: \u201cqsub_plink_freq_GWAS.IBD.ALL.txt\u201d.18.Prepare the phenotype file:\u201ccovar.GWAS.IBD.ALL.txt\u201d.19.Run linear association analysis for each continuous immune traits in PLINK:plink --bed [input_bed_filename]--bim [input_bim_filename]--fam [input_fam_filename]--allow-no-sex--keep-allele-order--keep [GWAS.IBD.ALL.TCGAID.txt]--pheno [Immune.phenotype.33.Set\u2217.GWAS.txt]--all-pheno--covar [covar.GWAS.IBD.ALL.txt]--linear hide-covar--out [output_filename]\u201cImmune.phenotype.33.Set\u2217.GWAS.txt\u201d.20.Run logistic regression on dichotomized discrete immune traits in PLINK by modifying the code in step 19 using the logistic command (--logistic).Note: \u201c.bed\u201d, \u201c.bim\u201d and \u201c.fam\u201d input files are provided separately in the sample code because SNP ID names from the original HRC imputed dataset were renamed with alleles listed in alphabetical order to assist matching with other datasets. This only affects the SNP ID name and not the encoding of the A1 and A2 alleles in PLINK which is maintained.21.\u22128 and suggestive significance at p\u00a0<\u00a01\u00a0\u00d7\u00a010\u22126 in our study.Filter resulting summary statistics from PLINK based on test p-values (P in PLINK). 
Genome-wide significance was defined at p\u00a0<\u00a05\u00a0\u00d7\u00a010See script: \u201cqsub_plink_linear_GWAS.IBD.ALL_Immune.33.Wolf.Set1.txt\u201d.See script:22.Optional: Visualize results for exploratory data analysis:a.Manhattan plot, plotting GWAS -log10 p-value against the base pair position per chromosome.b.Quantile-quantile plot (Q-Q plot), plotting the quantile distribution of observed p-values for each SNP against expected values from a theoretical \u03c72-distribution; calculate the genomic inflation factor (lamba), the median of the \u03c72 test statistics divided by the expected median of the \u03c72 distribution.Note: Interactive visualization of GWAS from (https://www.cri-iatlas.org/), in the \u201cGermline Analysis\u201d module or using the PheWeb tool (https://pheweb-tcga.qcri.org/) .i.Map the Minimac3 HRC imputation information to the GWAS summary stats using the variant identifier.See script: \u201cr_plotResults_GWAS.IBD.ALL_Immune.33.Set\u2217.r\u201d and \"qsub_r_plotResults_GWAS.IBD.ALL_Immune.33.Set\u2217.txt\".The Minimac3 HRC imputation information for each SNP (extracted from the filtered chr\u2217.info.rsq0.5.gz), including whether SNP was genotyped or imputed (Genotyped), the estimated value of the squared correlation between imputed genotypes and true, unobserved genotypes (Rsq), the average probability of observing the most likely allele for each haplotype , minor allele frequency of the variant in the imputed data (MAF) for the GWAS study set calculated from \u201cc.https://www.rdocumentation.org/packages/BSgenome/versions/1.40.1/topics/SNPlocs-class).i.Transform GWAS results chromosome and base pair position into a GRanges object using the R \u201cGenomicRanges\u201d package.ii.Define the set of SNPs as the \u201cSNPlocs.Hsapiens.dbSNP144.GRCh37\u201d dataset.iii.Overlap the GRanges chromosome and base pair position with the \u201cSNPlocs.Hsapiens.dbSNP144.GRCh37\u201d dataset using the snpsByOverlaps function.iv.Merge annotated data with results file.Note: Not all SNPs have corresponding rsIDs.See script: \u201cr_annotation_SNP.r\".Map the genomic chromosome and base pair position to corresponding SNP rsIDs and IUPAC nucleotide ambiguity codes using the R \u201cSNPlocs.Hsapiens.dbSNP144.GRCh37\u201d package (d.grch37.ensembl.org (https://uswest.ensembl.org/info/data/biomart/index.html).i.Create SNP information table (setSNPTable function) using SNP rsID, chromosome and base pair position.ii.grch37.ensembl.org.Create a gene information table (setGeneTable function) using gene attributes extracted from R \u201cbiomaRt\u201d package using host iii.Find overlaps of the SNP and gene information tables, setting the margin (bp) to the desired distance from SNP of interest (makeGeneSet function).See script: \u201cr_annotation_SNP.r\".The nearest genes to SNP of interest using R \u201csnplist\u201d package to map rsID to gene maps extracted via the R \u201cbiomaRt\u201d package using host e.i.Map the VEP annotation file with:a.24.a.Annotate each summary stat file with the corresponding immune trait, immune category and immune module summary stats across all immune traits tested.25.\u22128). 
This excludes the HLA locus on chr 6 which spans \u223c3.5 MB.Identify genome-wide significant loci as SNPs within 50 KB region with at least one genome-wide significant SNP (p\u00a0<\u00a05\u00a0\u00d7\u00a010Note: Scripts and code description used in this section are available at: https://github.com/rwsayaman/TCGA_PanCancer_Immune_Genetics.\u201cr_plotResults_GWAS.IBD.ALL_Immune.33.Set\u2217.r\u201d and \"qsub_r_plotResults_GWAS.IBD.ALL_Immune.33.Set\u2217.txt\".Genome-Wide Association Study (GWAS). The example code in this section were optimized for the high-performance compute environment at UCSF HPC employing Portable Batch System (PBS) job scheduling; consult your system administrator to adapt the provided code to your system.Direct link: Timing: 1\u20132\u00a0dayshttps://egg2.wustl.edu/roadmap/web_portal/chr_state_learning.html#exp_18state.This section describes the mapping of genome-wide significant and suggestive SNPs to the Roadmap Epigenomics Project Epigenomic Expanded 18-state model which uses 6 marks across 98 epigenomes: 26.a.Import the GWAS suggestive and genome-wide significant SNP result table into R.b.Create a unique data frame of SNP IDs, chromosome and base pair positions.c.Convert into GRanges object using the makeGRangesFromDataFrame function in \u201cGenomicRanges\u201d package.Transform the Immune-Germline GWAS suggestive and genome-wide significant SNPs results table into a GRanges object using the \u201cGenomicRanges\u201d package in R.27.a.Using \u201crtracklayer\u201d package in R, import each epigenome bed file with the annotated Epigenomic Expanded 18-state model.b.At each GRanges SNP chromosome and base pair position, extract the corresponding epigenomic state from each epigenome using the mergeByOverlaps function in \u201cGenomicRanges\u201d package.Iteratively loop and import each Roadmap Epigenomics Project epigenome, and extract epigenomic states that overlap each of the Germline-Immune SNP chromosome and base pair position.28.Map epigenome IDs to corresponding epigenome descriptions of source cell types and tissue types using the \u201cRoadmap.metadata.qc.jul2013.csv\u201d annotation file.29.Annotate epigenetic states with published color schema using \u201cFigureColors.csv\u201d file.30.Manually curate immune-associated epigenomes via cell type or tissue of origin .Note: Scripts and code description used in this section are available at: https://github.com/rwsayaman/TCGA_PanCancer_Immune_Genetics..26.TransEpigenomic Analysis.Direct link: Timing: 1\u00a0weekhttps://gtexportal.org/home/index.html. For the TCGA dataset, the gene expression matrix was downloaded from: https://gdc.cancer.gov/about-data/publications/pancanatlas and analysis was conducted locally.31.a.i.Download gene expression matrix: \u201cEBPlusPlusAdjustPANCAN_IlluminaHiSeq_RNASeqV2.geneExp.tsv\u201d.ii.https://grch37.ensembl.org/biomart/martview/. Required columns from the \u201cmart_report.txt\u201d output files are: \u201cChromosome\u201d, \u201cStart\u201d, \u201cEnd\u201d, and \u201cGene type\u201d.Download genes\u2019 starts, ends, and types from \u201cbiomaRt\u201d package from iii.Transpose the gene expression matrix. This can be run with R script \u201cOrganizeRNAMatrix.R\u201d.iv.Run association analysis between genome-wide and suggestive SNPs and genes within 1 MB using a linear regression model accounting for genetic ancestry PC1-7, sex, age, and cancer type . 
This cv.Summarize outcome: results will have the following: chromosome, position, gene name, distance from gene, pan-cancer sample size, pan-cancer effect size, pan-cancer p-value, and then the same information repeated for each cancer type.eQTL.b.i.https://gdc.cancer.gov/about-data/publications/PanCanAtlas-Splicing-2018.Download 5\u2032, 3\u2032, exon skipping, intron retention, and mutually exclusive exon splice events data from ii.Organize the data input and format it for SNP-splicing event association analysis. Use\u00a0\u201cscript PrepareData.sh\u201d. This script keeps the genes that mapped to the most significant SNPs . Example: grep \u2013f ListGenes.txt splice3prime\u00a0>\u00a0Data_3prime. It creates two more files that contain TCGA subject IDs, and splicing event IDs and types. It finally runs an R script \u201cAnalyze.R\u201d that performs association analysis between SNP genotypes and splicing events using linear regression model accounting for genetic ancestry PC1-7, sex, age, and cancer type .iii.Summarize outcome, an output file containing association results as follows: chromosome, position, ensemble ID, gene name, splicing event ID.sQTL.TCGA Analysis.32.a.i.Import the GRanges object of SNP chromosome and base pair positions from 13.ii.Load the chain file \u201cAH14150\u201d for Homo sapiens rRNA hg19 to hg38 from the \u201cAnnotationHub\u201d package in R.iii.Convert SNP chromosome and base pair positions from build GRCh37 to build GRCh38 using the liftOver function in the \u201crtracklayer\u201d package in R.See script: \u201cGWAS.SNPs_liftOver.GRCh38_extended.r\u201d.Convert all GWAS suggestive and genome-wide significant SNP chromosome and base pair positions from build GRCh37 to build GRCh38 to match GTEx annotation.b.i.From each tissue type, extract only GTEx eQTL SNP results that match the GWAS SNP GRCh38 chromosome and base pair position. The output is an R object of GTEx eQTL for suggestively significant variants.See script: \u201cGTEx.eQTL.all.assoc_extract_GWAS.sugg.SNPs.server.r\u201dii.Concatenate filtered GTEx eQTL files from each tissue corresponding to the GWAS suggestively significant variants. Iteratively load, annotate with tissue source, extract GRCh38 chromosome and base pair position from variant ID, and append each file into a single data frame in R.iii.Calculate the false discovery rate (FDR) per variant across all genes and all tissues.iv.Using the GTEx Ensembl IDs, map to gene symbol and Entrez IDs using the \u201cEnsDb.Hsapiens.v86\u201d package in R.v.Merge with Immune-Germline SNP annotation.vi.Extract GTEx eQTL results for variant-gene pairs with an FDR\u00a0<\u00a00.05 in at least one tissue. Exclude the HLA and IL17R locus which are simple eQTLs.vii.Optional: Visualize results by plotting the GTEx eQTL -log10 FDR p-value against the distance from the TSS (\u201ctss_distance\u201d).See scripts: \u201cGTEx.eQTL.all.assoc_processResults_GWAS.sugg.SNPs_extended.r\u201d and \u201cGTEx.eQTL.all.assoc_processResults_GWAS.sugg.SNPs_1mb_extended_plot.r\u201d.eQTL.c.i.From each tissue type, extract only GTEx sQTL SNP results that match the GWAS SNP GRCh38 chromosome and base pair position. The output is an R object of GTEx eQTL for suggestively significant variants.ii.Run the Linux bash script \u201crun_split_sqtl.sh\u201d. Each GTEx sQTL file is very large. This step\u00a0separates the large GTEx sQTL file into a number of small files. The input of this\u00a0script is a list of file names for GTEx sQTL. 
Each line of this input file is a file name for GTEx sQTL. The script generates a number of small files for each original GTEx sQTL file.iii.Run the R script \u201cr_extract.txt\u201d. This script takes 2 input files. One is an R object for GWAS suggestively significant variants. The other input file is the GTEx sQTL file generated from the previous step. The output is an R object of GTEx sQTL for suggestively significant variants.iv.Concatenate filtered GTEx sQTL files from each tissue corresponding to the GWAS suggestively significant variants. Iteratively load, annotate with tissue source, extract GRCh38 chromosome and base pair position from variant ID, and append each file into a single data frame in R.v.For sQTL, limit analysis to\u00a0+/- 500 KB region. Filter the resulting concatenated GTEx\u00a0sQTL file using the absolute value of the \u201ctss_distance\u201d, set a threshold \u2264 500,000\u00a0bp.vi.Calculate the false discovery rate (FDR) per variant across all genes and all tissues.vii.Using the GTEx Ensembl IDs, map to gene symbol and Entrez IDs using the \u201cEnsDb.Hsapiens.v86\u201d package in R.viii.Merge with Immune-Germline SNP annotation.ix.Extract GTEx sQTL results for variant-gene pairs with an FDR\u00a0<\u00a00.05 in at least one tissue. Exclude the HLA and IL17R locus which are simple eQTLs.x.Optional: Visualize results by plotting the GTEx sQTL -log10 FDR p-value against the distance from the TSS (\u201ctss_distance\u201d).Note: Scripts and code description used in this section are available at: https://github.com/rwsayaman/TCGA_PanCancer_Immune_Genetics.See scripts: \u201cGTEx.sQTL.all.assoc_processResults_GWAS.sugg.SNPs_500kb _extended.r\u201d and \u201cGTEx.sQTL.all.assoc_processResults_GWAS.sugg.SNPs_500kb_extended_plot\u201d.sQTL.GTEx Analysis.This section describes eQTL and sQTL analysis in TCGA and GTEx data. For GTEx data, eQTL/sQTL summary statistics across tissues were downloaded from Expression and splicing quantitative trait locus analysis (eQTLs and sQTL). The example code in this section were optimized for the high-performance compute environment at UCSF HPC employing Portable Batch System (PBS) job scheduling; consult your system administrator to adapt the provided code to your system.Direct link: Timing: 1 Week33.a.Create a file to determine SNP-gene-trait to be tested for colocalization. Run R script \u201cDetermineRegions.R\u201d. This script reads eQTL results and keeps SNP-gene eQTL FDR p\u00a0<\u00a00.1. Output of this script will be a file that contains 5 columns: chromosome, position, gene name, trait, and SNP significance (GW or suggestive).b.For each SNP-gene pair, run eQTL analysis between the SNP and the gene, and also between the 200 extra SNPs centered at the selected SNP and the gene. Use R script \u201cRunEQTL.sh\u201d. This script creates a folder for each SNP-gene pair. It extracts the list of 201 SNPs, calculates LD matrix (plink \u2013bfile EXTRACTED \u2013r square \u2013out EXTRACTED), and performs GWAS and eQTL association analysis using the \u201ceQTL.R\u201d R script. \u201ceQTL.R\u201d script outputs two files: one for GWAS and one for eQTL analysis containing the z-score, beta, and p-value.c.eCAVIAR -o coloc_c1.out -l EXTRACTED.ld -l EXTRACTED.ld -z GWAS.z -z eQTL.z -f 1 -c 1Run eCAVIAR using \u201cRunECAVIAR.sh\u201d script. It calls eCAVIAR as follows:The \u201cGWAS.z\u201d and \u201ceQTL.z\u201d are the z-score produced in the previous step. 
\u201cEXTRACTED.ld\u201d is a 1-line file that contains the LD between the selected SNP and the 200 SNPs surrounding it (100 SNP to the left and 100 SNPs to the right). The -c flag indicates the number of causal variants assumed in the model . The output that contains the colocalization posterior probability (CLPP) for each SNP is \u201ccoloc_c1.out_col\u201d.d.The same strategy is conducted for sQTL results.TCGA Analysis.34.a.Run the R script \u201cr_match_tcga_gtex.txt\u201d to match the effect allele between GTEx eQTL/sQTL results and TCGA GWAS results. It then calculates Z scores for both GTEx eQTL/sQTL and TCGA GWAS results.b.i.A table for index SNPs. The 1st column should be SNP ID and the 3rd column should be base-pair position in build 38.ii.A list of GTEx eQTL/sQTL results. Each line is the name of a GTEx eQTL/sQTL result. The last part of the file name should be \u201c_rsid.txt\u201d. Each GTEx eQTL/sQTL result file has the following columns: \u201cgene_id\u201d, \u201cvariant_id\u201d, \u201ctss_distance\u201d, \u201cma_samples\u201d, \u201cma_count\u201d, \u201cmaf\u201d, \u201cpval_nominal\u201d, \u201cslope\u201d, and \u201cslope_se\u201d.iii.A PLINK \u201c.bim\u201d file for TCGA genotype data. It has the following columns: chromosome, SNP ID, genetic distance, base-pair position, minor allele, and major allele.iv.A GWAS result file for TCGA. It has the following columns: chr:bp, CHR, SNP ID, BP, A1, A2, Genotyped, Rsq, AvgCall, MAF, Stratified.Freq, NCHROBSTEST, NMISS, BETA, STAT, and P.This R script requires 4 input files:c.i.Z score for GTEx eQTL/sQTL result.ii.Z score for TCGA GWAS result.iii.The list of SNPs in GTEx eQTL/sQTL result.iv.The list of SNPs in TCGA GWAS result.v.The list of SNPs and effect alleles in GTEx eQTL/sQTL result.vi.The list of SNPs and effect alleles in TCGA GWAS result.This R script generates 6 output files:d.Output files iii-vi will be used to generate LD matrix for eCAVIAR. Output files i and ii will be used directly for eCAVIAR.e.i.Run the python script \u201cmake_plink_command_gtex.py\u201d and \u201cmake_plink_command_tcga.py\u201d to generate PLINK commands for GTEx and TCGA separately. The output is a Linux bash file that contains a number of PLINK commands.ii.Run the bash file from steps 34e\u2013i.plink --bfile /wynton/scratch/dhu/gtex_geno/GTEx_WGS_chr11--extract snps_gtex_ENSG00000005801_Nerve_Tibial_ZNF195_rs7951724.txt --make-bed --out temp_gtexplink --bfile temp_gtex --a1-allele snps_alt_gtex_ENSG00000005801_Nerve_Tibial_ZNF195_rs7951724.txt 2 1 --recode A --out Geno_gtex_ENSG00000005801_Nerve_Tibial_ZNF195_rs7951724Example for GTEx:plink --bfile/wynton/scratch/dhu/tcga_geno/tcga_imputed_hrc1.1_noMissnp_b38_chr11 --extract snps_tcga_ENSG00000005801_Nerve_Tibial_ZNF195_rs7951724.txt --make-bed --out temp_tcgaplink --bfile temp_tcga --a1-allele snps_alt_tcga_ENSG00000005801_Nerve_Tibial_ZNF195_rs7951724.txt 2 1 --recode A --out Geno_tcga_ENSG00000005801_Nerve_Tibial_ZNF195_rs7951724Example for TCGA:Run PLINK commands to generate numeric genotype data for GTEx and TCGA for SNPs that were extracted as output files iii and iv from the previous step.f.Run the R scripts \u201cr_cor_gtex.txt\u201d and \u201cr_cor_tcga.txt\u201d to calculate Pearson correlation coefficients for each pair of SNPs in GTEx and TCGA genotype data. There are 2 input files for each R script. The 1st one is the genotype file that was generated from the previous step. The 2nd one is the file with SNP ID and alternate allele. 
This file was generated as the output file 5 or 6 in the step for running \u201cr_match_tcga_gtex.txt\u201d.g.i.Run the python script \u201cmake_ecaviar.py\u201d to generate commands to run eCAVIAR. The output is a Linux bash script that contains a number of eCAVIAR commands.ii.Run the Linux bash script from steps 34g\u2013i.eCAVIAR -l Cor_tcga_ENSG00000281491_Minor_Salivary_Gland_DNAJB5-AS1_rs72729406.txt -l Cor_gtex_ENSG00000281491_Minor_Salivary_Gland_DNAJB5-AS1_rs72729406.txt -z Result_tcga_ENSG00000281491_Minor_Salivary_Gland_DNAJB5-AS1_rs72729406.txt -z Result_gtex_ENSG00000281491_Minor_Salivary_Gland_DNAJB5-AS1_rs72729406.txt -c 2 -f 1 -o Result_eCAVIAR_c2_ENSG00000281491_Minor_Salivary_Gland_DNAJB5-AS1_rs72729406.txtNote: All programming scripts for GTEx colocalization were run on the UCSF Wynton HPC (https://wynton.ucsf.edu/hpc/) employing Portable Batch System (PBS) job scheduling. An example of script for submitting jobs on the cluster is \u201cqsub_run_plink_gtex.txt\u201d. The command for submitting the job is \u201cqsub_run_plink_gtex.txt\u201d and depends on the setup of the HPC cluster; consult your system administrator to adapt the provided code to your system.Note: Scripts and code description used in this section are available at: https://github.com/rwsayaman/TCGA_PanCancer_Immune_Genetics.Example:Run eCAVIAR assuming 2 causal SNPs.GTEx Analysis.This section describes colocalization analysis performed with eCAVIAR. This analysis was performed using TCGA and GTEx gene expression data. This analysis requires four input files: (1) GWAS summary statistics, (2) eQTL summary statistics, (3) LD matrix computed with GWAS data, and (4) LD matrix computed with genotype data used for eQTL analysis.Colocalization with eCaviar and manual curation of the expanded region.Direct link: Timing: 1\u00a0week35.Download VCF germline file see \u201cke\u201d. Sub-se36.Download annotations of curated Pathogenic and Likely Pathogenic Cancer Predisposition Variants from , which 37.a.Extract unique variants from \u201cTable\u00a0S2\u201d from and writb.bcftools view -r $i PCA.r1.TCGAbarcode.merge.tnSwapCorrected.10389.vcf.gz -O b -o CHR\"$i\".bcf.gzSplit downloaded vcf file per chromosome as follows, where \u201ci\u201d is the chromosome number:c.bcftools norm -Ou -m -any CHR\"$i\".bcf.gz | bcftools norm -Ou -f human_g1k_v37.fasta | |bcftools --missing-to-ref | bcftools annotate -Ob -x ID -I\u00a0+'%CHROM:%POS:-:%ALT' -O z -o CHR\"$i\"_norm.vcf.gzLeft normalize variants:d.bcftools\u00a0+setGT CHR\"$i\"_norm.vcf.gz -O z -o CHR\"$i\"_norm_nomissing.vcf.gz -- -t . 
-n 0p
d. Replace missing genotypes by homozygous reference (the command above, completed by the -n 0p option).
e. Convert VCF files to PLINK-formatted files:
plink --vcf CHR"$i"_norm_nomissing.vcf.gz --keep-allele-order --vcf-idspace-to _ --const-fid --allow-extra-chr 0 --split-x 2699520 154931044 no-fail --make-bed --out CHR"$i"_norm_nomissing
f. Extract the variant list prepared in step 37a and recode the mutations additively:
plink --bfile CHR"$i"_norm --extract range list_snps.txt --recode A --out OUT$i --allow-no-sex
These sub-steps annotate per-sample mutations using position information available in "Table S2" from the source publication.
38. Collapse mutations into genes.
39. Collapse genes into mutually exclusive pathways.
44. Perform pan-cancer analysis. a. For continuous immune traits, run linear regression. b. For phenotypes with skewed distribution, dichotomize values as low vs high, and run logistic regression.
45. Perform per-cancer analysis. a. Repeat steps 44a–b, without including cancer type as covariate.
46. Generate outcome summary: exome files related to samples for which all the covariates and at least one immune trait were available should result in a master file of N = 9,138 samples. There will be 832 pathogenic/likely pathogenic SNP/Indel events with at least one copy of the rare allele in the whole exome sequencing data, corresponding to 586 distinct pathogenic SNPs/Indels mapping to 99 genes. The regression analysis provides p-values and beta coefficients of the association with immune traits.
Note: Scripts and code description used in this section are available at: https://github.com/rwsayaman/TCGA_PanCancer_Immune_Genetics.
This section includes a workflow to assess the contribution of rare cancer predisposition variants to different immune traits.
Rare Variant Analysis.
Direct link:
The analysis protocols described above each yield data output files. Expected output files are summarized here. Data visualization tools enable researchers to explore these results interactively.
The output file from GCTA GREML is an .hsq file. For a complete description of output variables, see: https://cnsgenomics.com/software/gcta/#GREMLanalysis. The combined results table includes the ratio of genetic variance to phenotypic variance (Vg/Vp) estimate and SE; the likelihood-ratio test (LRT) p-value and sample size (n) for each immune trait; and the FDR p-value across all immune traits.
After conducting heritability analysis across 139 immune traits, we identified 10 immune traits with significant heritability (FDR p < 0.05), and 23 other traits with nominally significant heritability (p < 0.05) in at least one ancestry group. Within the European ancestry group, 28 traits had at least nominally significant heritability.
The output from performing GWAS in PLINK consists of .assoc.linear (or .assoc.logistic) files with the following columns: chromosome code (CHR), variant identifier (SNP), base-pair coordinate (BP), allele 1 (A1), allele 2 (A2), test identifier (TEST), number of observations with nonmissing genotype, phenotype, and covariates (NMISS), regression coefficient (BETA), t-statistic (STAT), and asymptotic p-value for the t-statistic (P) (https://www.cog-genomics.org/plink/1.9/formats#assoc_linear). After optional SNP annotation, additional columns include: the rsID and IUPAC nucleotide ambiguity codes from "SNPlocs.Hsapiens.dbSNP144.GRCh37"; the Genotyped, Rsq, AvgCall, and MAF columns from the Minimac3 HRC imputation information file; the recalculated MAF for the GWAS samples; nearest genes to the SNP of interest; and the VEP annotation.
Note: running the PLINK command with the parameter --keep-allele-order forces the original A1/A2 allele encoding, and A1 should be the minor allele as originally encoded. PLINK output files are further described in the PLINK documentation linked above.We identified genome-wide significant (p\u00a0<\u00a05\u00a0\u00d7\u00a010\u22128) associations at 23 loci for 10 immune traits. We also identified an additional 1,196 suggestive (p\u00a0<\u00a01\u00a0\u00d7\u00a010\u22126) associations for 33 traits .The output table includes all genome-wide and suggestively significant SNPs annotated with the mapped epigenetic state from the Roadmap Epigenomics Project Expanded 18-state model. Each epigenome ID/epigenome represents a column with entries designating the epigenetic state at the SNP chromosome and base pair position .Output files for eQTL will have the following: chromosome, position, gene name, distance from gene, pan-cancer sample size, pan-cancer effect size, pan-cancer p-value, and then the same information repeated for each cancer type.The output file for sQTL will contain association results as follows: chromosome, position, Ensembl ID, gene name, splicing event ID.The colocalization output includes the \u2212log10 QTL p-values of the index SNP and counter SNP, and the difference between these values (delta Counter SNP\u2212Index SNP); and the curated expanded range colocalization evidence assessment .Exome files related to samples for which all the covariates and at least one immune trait was available should result in a master file of N\u00a0= 9,138 samples. There will be 832 pathogenic/likely pathogenic SNPs/Indels events with at least one copy of the rare allele in the whole exome sequencing data, corresponding to 586 distinct pathogenic SNPs/Indels mapping to 99 genes. The regression analysis provides p-values and beta coefficients of the association with immune traits .Results can be explored in CRI iAtlas (https://www.cri-iatlas.org/), in the \"Germline Analysis\" module, which is prepopulated with data and results from . The\u00a0CRI iAtlas\u00a0\"Germline\u00a0Analysis\"\u00a0module visualizes GWAS summary statistics and p-values of all tested immune traits. The GWAS section provides visualization of significant\u00a0GWAS hits (p\u00a0<\u00a010\u22126), and colocalization results with colocalization posterior probability (CLPP) > 0.01; see the tutorial . A PheWeb instance (https://pheweb-tcga.qcri.org/) was set up to visualize GWAS summary statistics of all tested immune traits. SNP significance can also be shown for a specific SNP across all tested traits. External resources for SNPs can be accessed from PheWeb .This protocol and related scripts are tailored for the analyses of matched genomic and immune trait datasets in TCGA but can be applied to other datasets with similar structures. The majority of analyzed immune traits are derived from gene expression data and can be adapted to other studies. Code used for the generation of immune traits from gene expression data is provided, and has been previously applied to RNA-sequencing and microarray data .Below we report specific limitations related to each step of the protocol.Immune Traits: Immune signatures lack cancer-specific cell type resolution. The majority of immune traits were calculated based on specific gene sets from expression data (RNA-sequencing) from bulk tissue to generate estimates of immune cell activation or abundance using different enrichment or deconvolution techniques. Caution should be exercised when interpreting results in the context of specific tumors or cell types.
However, many of these signatures were validated in specific tissues and cancers via FACS sorting or immunohistochemistry/immunofluorescence imaging of immune populations.Many of the immune signatures are highly correlated and are not independent measures. Interpretation of results should be considered in the context of functional modules defined as clusters of highly correlated signatures. The distribution of immune trait values in the dataset should be considered, and transformation of the data should be performed as needed to approximate a normal distribution. Traits with a high fraction of zero values should be considered for dichotomization.Heritability Analysis: For heritability estimates run via GCTA GREML, the GCTA FAQ (https://cnsgenomics.com/software/gcta/#FAQ) states that at least 3,160 samples from unrelated individuals are needed to get estimates with standard errors (SEs) down to 0.1 for common SNPs. Only the European ancestry group meets this criterion. Nonetheless, heritability estimates were run in the smaller sized ancestry groups with the expectation of large SEs to provide preliminary analyses of immune traits in ancestry groups that are not well studied or sampled.Heritability analysis takes into account only common variants. In this protocol, we used MAF\u00a0>\u00a01% as cut-off. The contribution of rare variants is not accounted for and may explain \u201cmissing\u201d heritability.GWAS: Linear regression assumes the residuals are normally distributed. Immune traits with skewed distributions were first log10 transformed; those assessed to have close to normal distribution were used as continuous variables. However, some immune traits remained with very skewed distributions due to a high fraction of 0 values; these traits were converted to binary 0 and 1 values based on the median value, and logistic regression was performed instead (see \u201cTable\u00a0S2\u201d ).GWAS was run pan-cancer on the non-hematologic cancers in TCGA, which vary in cohort size from 36 (CHOL) to 999 (BRCA). Results may be representative of the most common cancers in TCGA. Post hoc evaluation of associations per cancer (forest plots) can provide insight into the directionality of the betas per cancer and identify potential outliers.Imputed variants were filtered using imputation quality R2\u00a0\u2265\u00a00.5 and MAF\u00a0\u2265\u00a00.5% as cut-offs for inclusion. The HRC panel (version 1) consists of 64,976 haplotypes at >39M SNPs constructed from 20 whole genome sequencing studies . Per-cancer analysis might reveal additional cancer-specific hits, which might be compared with the ones in the related tissue. Because of the relatively limited number of samples available for each cancer type, per-cancer GWAS and colocalization should be preferentially performed by combining additional cancer-specific sources containing both phenotypic and genotypic data beyond TCGA.Colocalization: Our protocol performed colocalization within a 200 SNP window (+/- 100 from the index SNP). In some cases, we observed that there were SNPs outside of that window that had better association with gene expression but weaker or no association with the immune trait. Therefore, we also performed a manual inspection of the entire locus . We only performed this expanded region analysis of colocalization if there was plausible evidence of colocalization (eCAVIAR CLPP > 0.01) for the 200 SNP window.
This expanded region analysis was intended to provide a more stringent criterion. To perform these analyses, we plotted the negative log10 p-value of association with the immune trait for each SNP in the region on the X-axis and the negative log10 p-value for association with the relevant gene expression or splicing event on the Y-axis. If we found additional SNPs outside of the 200 SNP window that demonstrated a stronger effect for association with gene expression or the sQTL and weaker association with the index SNP, we developed additional criteria for colocalization. If we identified one or more SNPs outside of the window that had a \u2013log10 p-value with expression or splicing that was more than 1.5 units greater than that of the index SNP, we considered that as negative evidence for colocalization. If the SNP(s) with stronger evidence for eQTL or sQTL association had a \u2013log10 p-value that differed by \u22641.5 from that of the index SNP, we considered the evidence for colocalization as \u201cintermediate.\u201d Finally, if we found no other SNP in the entire region with strong eQTL or sQTL association that had a better p-value than the index SNP and the eCAVIAR results gave a posterior probability of colocalization of > 0.01, we considered the evidence to be \u201cstrong.\u201dWhile we used this manual second step in addition to eCAVIAR, an alternative approach would have been to use COLOC .Within the TCGA dataset, colocalization was performed on a pan-cancer level. This analysis can be conducted on each cancer type separately, assuming that GWAS and eQTL analyses are also performed for each cancer type separately. Low sample size can be a limiting factor for the per-cancer analysis.Colocalization is based on gene-expression data from TCGA and GTEx\u00a0but can also be performed in different datasets that might be available.Rare Variant Analysis: In this analysis, we focused on variants occurring in cancer-predisposition genes, as previously annotated by . Different aggregations into functional categories might be defined by the users. Heterogeneity of germline calls and batch effects prevented us from running a comprehensive exome-wide analysis of rare variants (MAF\u00a0<\u00a00.05%) as previously defined .Cannot load software or run scripts on the high-performance compute server; implementation of the provided GitHub code produces errors.Consult your institution\u2019s IT or compute cluster administrator for proper installation of necessary software, including all needed libraries, based on the high-performance compute environment. Ensure that the proper software versions, including all libraries and dependencies, are installed. Software implementation may be version specific; the versions used in the protocol are provided to ensure reproducibility.\u2022For troubleshooting of heritability analysis in GCTA GREML, see: https://cnsgenomics.com/software/gcta.\u2022For troubleshooting of GWAS in PLINK, see: https://www.cog-genomics.org/plink/.\u2022For issues with installation of CRI iAtlas, see the troubleshooting guide on the software website: https://github.com/CRI-iAtlas/iatlas-app.Provided code should be considered as a guide. Adjust parameters based on cluster capabilities and specifications. Job submission scripts are dependent on the resource allocation management system.
E.g., the provided GitHub code for heritability analysis and GWAS was optimized for the high-performance compute environment at the University of California, San Francisco, employing Portable Batch System (PBS) job scheduling; consult your system administrator to adapt the provided code to your system.The TCGA data (https://gdc.cancer.gov/about-data/publications/CCG-AIM-2020) were in Build 37 (GRCh37) but the GTEx data were in Build 38 (GRCh38), so we converted the genomic coordinates in TCGA from GRCh37 to GRCh38 using liftOver (https://genome-store.ucsc.edu/) so that these two data sets can be compared. In some cases, the SNP in question was not present in the 1000 Genomes Project data.Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Rosalyn Sayaman (rwsayaman@gmail.com).This study did not generate new unique reagents."} {"text": "Permanent magnet synchronous motors (PMSM) are widely used in industry applications such as home appliances, manufacturing processes, high-speed trains, and electric vehicles. Unexpected faults of PMSM are directly related to significant losses in the engineered systems. The majority of motor faults are bearing faults and stator faults . This article reports a vibration and driving current dataset of three-phase PMSM with three different motor powers under eight different severities of stator fault. PMSM conditions including normal, inter-coil short circuit fault, and inter-turn short circuit fault are demonstrated in three motors with different powers of 1.0 kW, 1.5 kW and 3.0 kW, respectively. The PMSMs are operated under the same torque load condition and rotating speed. The dataset is acquired using one integrated electronics piezo-electric (IEPE) based accelerometer and three current transformers (CT) with a National Instruments (NI) data acquisition (DAQ) board under the international organization for standardization standard (ISO 10816-1:1995). The established dataset can be used to verify newly developed state-of-the-art methods for PMSM stator fault diagnosis. Mendeley Data. DOI: 10.17632/rgn5brrgrn.5 Specifications Table\u00b7 This dataset is acquired from three motors with different powers of 1.0 kW, 1.5 kW, and 3.0 kW, respectively. Two different types of faults including inter-turn short circuits and inter-coil short circuits are seeded. This dataset consists of vibration data, which represent the shock of the bearing due to torque unbalance caused by motor stator faults, and current data, which represent the changes of motor driving power.\u00b7 This dataset is collected according to the ISO guideline (ISO 10816-1:1995). Stator faults are artificially seeded using bypassing resistances allocated in inter-coil circuits and inter-turn circuits. Three motors with different powers of 1.0 kW, 1.5 kW, and 3.0 kW are tested under the same experimental setup including rated rotating speed (3000 RPM), rated load condition , sensor location, and their sampling rates .\u00b7 Considering that a dataset for fault diagnosis often requires a large amount of effort and time to collect, this dataset can be a useful resource in the fault diagnosis research field This dataset was established for deep learning based motor fault diagnosis research. Unlike in other research fields, data are difficult to obtain in fault diagnosis research because it is difficult to apply an actual failure to a real machine, which complicates the training of deep learning algorithms.
To solve this problem, we simulated motor stator faults according to the motor power, and obtained vibration and driving current data according to the severity of the faults. The dataset was measured based on mechanical engineering knowledge in accordance with ISO international standards, and was used for verification of deep learning based fault diagnosis methods.The collected dataset consists of vibration and current data acquired from the three PMSMs with different powers of 1.0 kW, 1.5 kW and 3.0 kW. In each motor, a total of 16 stator faults are seeded, with 8 inter-coil circuit faults and 8 inter-turn circuit faults. The motors rotate at a rated rotating speed of 3000 RPM and rated load condition . The collected dataset is stored in technical data management streaming (TDMS) files. The TDMS file format can be accessed easily with other data analysis programs such as MATLAB . The vibration data files include the z-direction of the PMSM for inter-turn short circuit and inter-coil short circuit. The description of the vibration files as per operating and health conditions of the motor is provided as follows:1.1000W_0_00_vibration_interturn.tdms: This file includes healthy inter-turn short circuit vibration data in z-direction acquired from the motor whose power is 1.0 kW.2.1000W_2_26_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 2.26 % severity acquired from the motor whose power is 1.0 kW.3.1000W_2_70_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 2.70 % severity acquired from the motor whose power is 1.0 kW.4.1000W_3_35_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 3.35 % severity acquired from the motor whose power is 1.0 kW.5.1000W_4_41_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 4.41 % severity acquired from the motor whose power is 1.0 kW.6.1000W_6_48_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 6.48 % severity acquired from the motor whose power is 1.0 kW.7.1000W_12_17_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 12.17 % severity acquired from the motor whose power is 1.0 kW.8.1000W_21_69_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 21.69 % severity acquired from the motor whose power is 1.0 kW.9.1000W_0_00_vibration_intercoil.tdms: This file includes healthy inter-coil short circuit vibration data in z-direction acquired from the motor whose power is 1.0 kW.10.1000W_0_68_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 0.68 % severity acquired from the motor whose power is 1.0 kW.11.1000W_0_81_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 0.81 % severity acquired from the motor whose power is 1.0 kW.12.1000W_1_01_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 1.01 % severity acquired from the motor whose power is 1.0 kW.13.1000W_1_34_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 1.34 % severity acquired from the motor whose power is 1.0 kW.14.1000W_2_00_vibration_intercoil.tdms: This
file includes inter-coil short circuit fault vibration data in z-direction with 2.00 % severity acquired from the motor whose power is 1.0 kW.15.1000W_3_93_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 3.93 % severity acquired from the motor whose power is 1.0 kW.16.1000W_7_56_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 7.56 % severity acquired from the motor whose power is 1.0 kW.17.1500W_0_00_vibration_interturn.tdms: This file includes healthy inter-turn short circuit vibration data in z-direction acquired from the motor whose power is 1.5 kW.18.1500W_1_57_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 1.57 % severity acquired from the motor whose power is 1.5 kW.19.1500W_1_88_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 1.88 % severity acquired from the motor whose power is 1.5 kW.20.1500W_2_34_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 2.34 % severity acquired from the motor whose power is 1.5 kW.21.1500W_3_10_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 3.10 % severity acquired from the motor whose power is 1.5 kW.22.1500W_4_57_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 4.57 % severity acquired from the motor whose power is 1.5 kW.23.1500W_8_74_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 8.74 % severity acquired from the motor whose power is 1.5 kW.24.1500W_16_08_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 16.08 % severity acquired from the motor whose power is 1.5 kW.25.1500W_0_00_vibration_intercoil.tdms: This file includes healthy inter-coil short circuit vibration data in z-direction acquired from the motor whose power is 1.5 kW.26.1500W_4_79_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 4.79 % severity acquired from the motor whose power is 1.5 kW.27.1500W_5_70_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 5.70 % severity acquired from the motor whose power is 1.5 kW.28.1500W_7_02_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 7.02 % severity acquired from the motor whose power is 1.5 kW.29.1500W_9_15_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 9.15 % severity acquired from the motor whose power is 1.5 kW.30.1500W_13_12_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 13.12 % severity acquired from the motor whose power is 1.5 kW.31.1500W_23_20_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 23.20 % severity acquired from the motor whose power is 1.5 kW.32.1500W_37_66_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 37.66 % severity acquired from the motor whose power is 1.5 kW.33.3000W_0_00_vibration_interturn.tdms: This file includes healthy 
inter-turn short circuit vibration data in z-direction acquired from the motor whose power is 3.0 kW.34.3000W_1_78_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 1.78 % severity acquired from the motor whose power is 3.0 kW.35.3000W_2_13_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 2.13 % severity acquired from the motor whose power is 3.0 kW.36.3000W_2_65_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 2.65 % severity acquired from the motor whose power is 3.0 kW.37.3000W_3_50_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 3.50 % severity acquired from the motor whose power is 3.0 kW.38.3000W_5_16_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 5.16 % severity acquired from the motor whose power is 3.0 kW.39.3000W_9_81_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 9.81 % severity acquired from the motor whose power is 3.0 kW.40.3000W_17_86_vibration_interturn.tdms: This file includes inter-turn short circuit fault vibration data in z-direction with 17.86 % severity acquired from the motor whose power is 3.0 kW.41.3000W_0_00_vibration_intercoil.tdms: This file includes healthy inter-coil short circuit vibration data in z-direction acquired from the motor whose power is 3.0 kW.42.3000W_2_49_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 2.49 % severity acquired from the motor whose power is 3.0 kW.43.3000W_2_98_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 2.98 % severity acquired from the motor whose power is 3.0 kW.44.3000W_3_69_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 3.69 % severity acquired from the motor whose power is 3.0 kW.45.3000W_4_86_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 4.86 % severity acquired from the motor whose power is 3.0 kW.46.3000W_7_12_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 7.12 % severity acquired from the motor whose power is 3.0 kW.47.3000W_13_10_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 13.10 % severity acquired from the motor whose power is 3.0 kW.48.3000W_23_48_vibration_intercoil.tdms: This file includes inter-coil short circuit fault vibration data in z-direction with 23.48 % severity acquired from the motor whose power is 3.0 kW.Vibration data were measured using an accelerometer (PCB352C34) and acquired using an NI9234 module for 120 seconds with a sampling frequency of 25.6 kHz. Each vibration data file contains two columns, namely \u2018Time Stamp\u2019 and \u2018amplitude\u2019. The unit of the vibration amplitude is \u2018g\u2019 (gravitational acceleration).
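As a minimal illustration of handling these recordings, the R sketch below computes common vibration severity indicators for one signal; it assumes the TDMS data have already been exported to a two-column CSV file (the file name is hypothetical), which is not part of the original dataset description.
# Read an exported vibration record (hypothetical CSV export of one .tdms file)
vib <- read.csv("1000W_0_00_vibration_interturn.csv")  # columns: Time.Stamp, amplitude
fs <- 25600  # sampling frequency in Hz, as stated above
# Root-mean-square, peak and crest factor of the amplitude (in g)
rms_g <- sqrt(mean(vib$amplitude^2))
peak_g <- max(abs(vib$amplitude))
crest_factor <- peak_g / rms_g
c(rms = rms_g, peak = peak_g, crest = crest_factor)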
The current files as per operating and health conditions of the motor are described analogously to the vibration files above.A PMSM testbed was built to measure the vibration and current data of the healthy state and faulty states from PMSMs with different powers, as shown in .Failure modes of the motor stator can be categorized into: first, the increase of the resistance of the stator (open-like fault); or second, the decrease of the resistance of the stator (short-like fault). In the case of an open-like fault, a part of the stator coil is damaged, increasing the stator resistance so that the driving current decreases in proportion to the resistance change for a given input voltage. Hence this type of failure is relatively easy to detect by reading the overall decrease of the current. On the other hand, most of the difficult failures come from the second failure mode, in which part of the stator coil is damaged to form a short-like circuit between turns or between coils . These types of short circuit faults introduce a bypassing path for the driving current so that the normal current flowing through the stator coil is reduced, following Kirchhoff's law. As a result, the motor experiences a reduction of the electromagnetic field and, in turn, of the induced torque. Considering these types of short circuit faults, we seeded the inter-turn short circuit fault and the inter-coil short circuit fault by controlling the bypass resistances in the short circuits, as follows.Rbypass denotes the bypass resistance value, and R the stator resistance value.The smaller the bypass resistance value, the larger the driving current that flows through the bypass resistance and the smaller the driving current that flows through the motor stator, due to Kirchhoff's law. In this case, the severity of the motor's fault is regarded as high. Therefore, the fault was representatively seeded by adding a short circuit connecting turns (or coils) through the bypassing resistance to the motor winding, as shown in .Human Lab., Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea has given consent that the datasets may be publicly released as part of this publication. We declare that the manuscript adheres to Ethics in Publishing standards, that the submitted dataset is the real data recorded in the experiment, and that no other people's data were taken or modified.Wonho Jung: Conceptualization, Methodology, Software, Validation, Visualization, Data curation, Writing \u2013 original draft, Writing \u2013 review & editing. Sung-Hyun Yun: Data curation. Yoon-Seop Lim: Investigation. Sungjin Cheong: Investigation. Yong-Hwa Park: Funding acquisition, Writing \u2013 review & editing, Supervision.The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."}
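To make the current-divider reasoning above concrete, here is a minimal R sketch of the relation between the bypass resistance and the fraction of driving current diverted away from the stator winding; the resistance values are illustrative assumptions, and the exact severity formula of the original article is not reproduced.
# Current divider between the stator winding (R) and the bypass path (Rbypass):
# the fraction of total current diverted into the bypass is R / (R + Rbypass)
bypass_fraction <- function(R, Rbypass) R / (R + Rbypass)
R <- 1.0                       # illustrative stator resistance (ohm)
Rbypass <- c(100, 10, 1, 0.1)  # decreasing bypass resistance (ohm)
round(bypass_fraction(R, Rbypass), 3)
# Smaller Rbypass -> larger diverted fraction -> higher fault severity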
{"text": "The saccadic pathway involves numerous regions of the brain cortex, the cerebellum and the brainstem. Saccadic movement latency, velocity and precision parameters assess the efficacy of central nervous system (CNS) control over rapid eye movements. Very few disorders which alter the CNS are missed when these parameters are carefully measured using a computer. Pendular tracking assesses the integrity of the oculomotor system in controlling slow eye movements - vulnerable to CNS and vestibular system dysfunctions. Optokinetic nystagmus represents a stereoceptive response which compensates for environmental movements through psycho-optical inputs.To compare the oculomotricity values found in children with and without learning complaints.Prospective study. We included 28 children of both genders, aged between 8 and 12 years, with learning disorders (study group) and 15 without (control group). We carried out the fixed and randomized saccadic movement tests, the pendular tracking study and optokinetic nystagmus testing.There was a statistically significant difference between the groups concerning the randomized saccadic movement velocity parameters and in the pendular tracking test.The children with learning disorders presented alterations in some oculomotricity tests when compared to children without complaints. Abnormalities of voluntary saccade control have been seen in developmental disorders such as dyslexia, learning disorders, and hyperactivity and attention deficit. The saccadic pathway involves numerous regions of the brain cortex, cerebellum and brainstem. Latency, velocity and saccadic movement precision parameters assess the efficiency of central nervous system (CNS) control over rapid eye movements .Pendular tracking is a test which is strongly affected by the patient's attention span and collaboration. Pendular tracking may be poorly formed in inattentive and non-cooperative patients, or even in some elderly individuals, without this meaning a central disorder.The eye movement necessary for reading requires alternating saccadic movements and fixation periods. It starts with a saccade that runs over 8 to 10 words mixed with periods of eye fixation, and ends with a long saccade in order to start a new line.The goal of the present study was to compare the eye movement parameters found in children with learning disorders - especially in reading and writing - with those of children without complaints.QUESTIONNAIRE TEMPLATEChild: ________________________________________________________________Current date: ____/____/ ________Parent/guardian: ________________________________________________________The questionnaire below aims at studying learning difficulties. Please answer it by marking the option you think correct with an \u00d7 and fill in the blanks when requested. The data hereby expressed serve research purposes (data computing) and will be kept confidential. It is very important that you return this form to us. The questionnaire must be returned on the day of the medical consultation.Does your child have or has had:Difficulties reading: yes no - Which? _____________________________Difficulties writing: yes no - Which? _____________________________Poor school performance: yes no Repeated a school year(s): yes no Why do you think this happened? ___________________________________________________________________________________________________________________________Problems of attention and concentration at school: yes no The population studied was made up of 43 children of both genders, aged between 8 and 12 years: 28 with a diagnosis of learning disorders in reading and writing (study group) and 15 in the control group without any learning disorder. None of the children in either group wore glasses. We carried out the fixed and randomized saccadic movement tests, the pendular tracking test at the frequencies of 0.20Hz, 0.40Hz and 0.80Hz, and the optokinetic nystagmus test.
Equipment: digital vectonystagmograph with the VECWIN software and a Neurograff Eletromedicina\u00ae LED bar. The parents/guardians received a questionnaire about learning-related symptoms. The analyses were done automatically by the software. As far as eye movements are concerned, we analyzed the following parameters: accuracy; velocity; latency; and gain (the relationship between eye velocity and the stimulus velocity, used in the optokinetic and pendular tracking tests).As far as gender is concerned, the control and study groups were homogeneous (p\u00a0= 0.518). The distribution is shown in .In our study we could notice that the mean values found in fixed saccade movements are within the normal ranges of digital vectonystagmography in terms of accuracy, latency and velocity .Children with learning disorders have alterations in some eye movement tests when compared to \u201cnormal\u201d healthy children."} {"text": "The complexity and volume of data associated with population-based cohorts means that generating health-related outcomes can be challenging. Using one such cohort, the UK Biobank\u2014a major open access resource\u2014we present a protocol to efficiently integrate the main dataset and record-level data files, to harmonize and process the data using an R package named \u201cukbpheno\u201d. We describe how to use the package to generate binary phenotypes in a standardized and machine-actionable manner.For complete details on the use and execution of this protocol, please refer to .\u2022A protocol to efficiently process UK Biobank phenotypic data\u2022Reproducible and semi-automatic generation of binary health-related phenotypes\u2022Utilities with visualization to aid data exploration and phenotype definition\u2022Example analyses common among genetic epidemiological studies Publisher\u2019s note: Undertaking any experimental protocol requires adherence to local institutional guidelines for laboratory safety and ethics.The UK Biobank is a large-scale population cohort with in-depth collection of phenotypic and genetic data . While i\u2026 (https://github.com/niekverw/ukbpheno). The package contains functionalities to harmonize data of various sources in a consistent manner accompanied by structured metadata. To allow interactive exploration, the package contains multiple functionalities to visualize the data, and it can be run on typical workstations (a high-performance computing cluster is not required). In line with the FAIR principle , it is \u2026In this protocol, we show how to download and process the data from the UK Biobank as well as how to generate health-related phenotypes with the use of ukbpheno. More specifically, we utilize the ukbpheno package to first harmonize various data into a single episode format and then generate the phenotypes . We then\u2026Timing: 4\u201312 hNote: The UK Biobank data are an open access resource to any bona-fide researchers. The data analyzed in this protocol were obtained through application number 74395.
Please refer to the following link to request access to UK Biobank data: https://www.ukbiobank.ac.uk/enable-your-research/apply-for-access.1.Download and decrypt the main dataset.a.Follow the steps in the UK Biobank documentation (https://biobank.ctsu.ox.ac.uk/\u223cbbdatan/Accessing_UKB_data_v2.3.pdf) to obtain and decrypt the main dataset from the UK Biobank Data Showcase (Showcase). Files needed in this step include:i.An MD5 Checksum to verify download.ii.A key file for decryption.iii.Utilities software \u201cukbmd5\u201d and \u201cukbunpack\u201d available on Showcase.b.Generate a metadata file (.html) of the decrypted main dataset (.enc_ukb) using the utility \u201cukbconv\u201d (available on Showcase):ukbconv ukbxxxxx.enc_ukb docc.Generate a tab-separated file (.tab) of the decrypted main dataset (.enc_ukb) using the utility \u201cukbconv\u201d:ukbconv ukbxxxxx.enc_ukb rCRITICAL: Do not run the accompanying ukbxxxxx.R script generated at this step. Only the tab-separated file (.tab) is required.2.Obtain the record-level data from the Data Portal on Showcase (https://biobank.ndph.ox.ac.uk/showcase/).a.Click \u201cLogin\u201d at the top of the webpage, which redirects users to the Access Management System.b.Once logged in, select \u201cProjects\u201d on the left panel of the Access Management System and view the relevant project (\u201cView/Update\u201d).c.Inside the project, select the \u201cData\u201d tab to connect to the Showcase in \u201clogged-in\u201d mode.i.Once in the Showcase, move to the \u201cDownloads\u201d tab at the top of the webpage.ii.In the \u201cDownloads\u201d page, researchers with access to record-level data will see a tab \u201cData Portal\u201d next to the tab \u201cDatasets\u201d.iii.Connect to the record repository in the \u201cData Portal\u201d tab.d.Request access to the record-level data:i.Request access to record-level hospital inpatient data via Field 41259.ii.Request access to record-level primary care data via Field 42038, 42039 and 42040.iii.Request access to the record-level death register via Field 40023.Download the complete data tables (.txt) through the \u201cTable\u00a0Download\u201d tab.Install the required packages inside an R session:install.packages(\"data.table\")install.packages(\"dplyr\")install.packages(\"ggplot2\")install.packages(\"ggforce\")install.packages(\"tableone\")install.packages(\"survminer\")install.packages(\"MatchIt\")install.packages(\"devtools\")devtools::install_github(\"niekverw/ukbpheno\")library(data.table)library(dplyr)library(ukbpheno)library(ggplot2)library(ggforce)library(tableone)library(survminer)library(\"MatchIt\")There is no hard requirement on R versions for ukbpheno. Results presented in this protocol were produced running R version 4.0.3 with RStudio version 1.3.959 on a Unix system.Timing: 2\u20138 h1.Download the data setting file (data.settings.tsv) to the project directory from https://github.com/niekverw/ukbpheno/tree/master/inst/extdata/data.settings.tsv.2.Download the definition table template to the project directory from https://github.com/niekverw/ukbpheno/tree/master/inst/extdata/definitions_DmRxT2.tsv.3.a.i.Each code should be separated by a comma.ii.For code systems with a hierarchical structure (refer to the data setting file), it is possible to fill in only the parent codes instead of specifying all codes.iii.Annotations of the codes can be made using curly brackets (\u201c{}\u201d).Optional: We included a shiny app to cross-reference codes between systems using the mapping file provided by UK Biobank (https://github.com/niekverw/ukbpheno/blob/master/inst/util/shiny.lookup_codes.R).
Download the code map file (Excel workbook) provided by the UK Biobank (https://biobank.ndph.ox.ac.uk/showcase/refer.cgi?id=592): (1) locate the shiny app script and run the shiny app, and (2) visit the address returned in a web browser and use the app. A screenshot of the shiny app can be found in the accompanying figure.For each of the code systems, e.g., diagnosis codes ICD10 or operation codes OPCS4, as well as codes used in the self-report fields, fill in the corresponding codes in the table.b.Fill in fields with conditions in the \u201cTS\u201d (touchscreen) column.i.Fill in the field number as on Showcase followed by the condition, e.g., \u201c6177=3(insulin)\u201d.ii.Add the corresponding age of diagnosis in square brackets (\u201c[]\u201d) following the condition, e.g., \u201c4041=1[2976]\u201d.Fill in one phenotype (such as DmT2) per row. The column \u201cTRAIT\u201d contains the unique identifier of each phenotype, which is case sensitive.4.It is possible to create a composite phenotype, which involves other phenotypes. Composite phenotypes are constructed using four columns in the definition table .a.\u201cStudy_population\u201d can be used to restrict a definition to a subgroup of participants with a specific phenotype.b.Participants with phenotypes in \u201cInclude_definition\u201d will be considered to be a case for the composite phenotype.c.Users may use the \u201cExclude_from_cases\u201d and \u201cExclude_from_controls\u201d columns to exclude participants with certain phenotype(s) from cases and controls respectively.Note: For example, a composite phenotype \u201cdiabetes mellitus\u201d may include two phenotypes, \u201ctype 1 diabetes\u201d and \u201ctype 2 diabetes\u201d. Alternatively, for the phenotype \u201ctype 2 diabetes\u201d we may want to exclude any cases with also a \u201ctype 1 diabetes\u201d diagnosis.5.The definition table template \u201cdefinitions_DmRxT2.tsv\u201d contains definitions constructed for the definition of type 2 diabetes in the UK Biobank.a.Traits \u201cDmT2\u201d, \u201cDmT1\u201d and \u201cDmG\u201d contain specific codes for diabetes type 2, type 1 and gestational diabetes respectively;b.\u201cRxDm\u201d defines the antidiabetic medication, which is further divided into \u201cRxDmIns\u201d (Insulin) and \u201cRxDmOr\u201d ;c.\u201cDm\u201d captures general codes for diabetes and the remaining definitions are used to differentiate between type 1 and type 2 diabetes within this group.Health outcome information from various data sources / data fields within the main dataset is encoded differently. These relationships have been curated and recorded in the data setting file included in the ukbpheno package. For a target phenotype, survey the various data sources/\u00a0data fields on the Showcase and determine the definitions for the target phenotype in\u00a0UK Biobank. An example definition table to define type 2 diabetes is included in the package.\u00a0This example table can be used as a template for users to define their target health outcomes.Timing: 15\u00a0min6.Specify data file paths in R:# The directory with data filespheno_dir <-\"mydata/ukb99999/\"# Main dataset (file names below follow the ukbxxxxx convention used above; adjust to your own files)fukbtab <- paste0(pheno_dir, \"ukbxxxxx.tab\")# Metadata filefhtml <- paste0(pheno_dir, \"ukbxxxxx.html\")# Hospital inpatient datafhesin <- paste0(pheno_dir, \"hesin.txt\")fhesin_diag <- paste0(pheno_dir, \"hesin_diag.txt\")fhesin_oper <- paste0(pheno_dir, \"hesin_oper.txt\")# GP datafgp_clinical <- paste0(pheno_dir, \"gp_clinical.txt\")fgp_scripts <- paste0(pheno_dir, \"gp_scripts.txt\")# Death registryfdeath_portal <- paste0(pheno_dir, \"death.txt\")fdeath_cause_portal <- paste0(pheno_dir, \"death_cause.txt\")# Participant withdrawal list (file name illustrative)f_withdrawal <- paste0(pheno_dir, \"withdrawals.csv\")7.Specify file paths for the data setting file, the definition table and the code maps which are included in the package (extdata/).
Alternatively, download the files from the code repository of ukbpheno hosted on GitHub.# Or download the files from https://github.com/niekverw/ukbpheno/tree/master/inst/extdata/# Path to the package's extdata folder (reconstructed)extdata_dir <- paste0(system.file(\"extdata\", package = \"ukbpheno\"), \"/\")fdefinitions <- paste0(extdata_dir, \"definitions_DmRxT2.tsv\")fdata_setting <- paste0(extdata_dir, \"data.settings.tsv\")8.Read the data setting file. The pre-curated data setting file specifies the characteristics of each data source, which are taken into account in the data harmonization process.dfData.settings <- fread(fdata_setting)9.Run the \u201cread_defnition_table\u201d function to process the definition table.a.The function expands parent codes using the code maps and sorts out codes relevant for inclusion and exclusion accordingly.i.Code maps include all available codes.b.The function will also cross-check codes entered in the definition with the code maps and warn users of any non-matching codes, e.g.:i.A specific ICD10 code may not exist in the UK Biobank ICD10 code map as this code is not present in the data.ii.There may be typos.# Argument list reconstructed; see the package manual for the full signaturedfDefinitions_processed_expanded <- read_defnition_table(fdefinitions)Optional: Alternatively download the code maps from the UK Biobank Showcase or create them manually by extracting all unique codes from your data using \u201cget_all_exsiting_codes\u201d, which generates flat-form code maps. Adjust the data setting file accordingly.# First input: file path to GP clinical table# Second input: corresponding column names from the .txt file# Third input: output file-path (name illustrative)get_all_exsiting_codes(fgp_clinical, c(\"read_2\", \"read_3\"), \"gp_clinical_codemap.tsv\")Input files required by the package include data files from UK Biobank including the main dataset, the metadata file and optionally data tables from the Data Portal; the completed definition table and the data setting file.Timing: 15\u201345\u00a0minAt the harmonization step, we combine all the available data files from various sources and transform them into the format of clinical events to facilitate downstream analyses .10.Load, process and harmonize all data files using harmonize_ukb_data.a.The \u201callow_missing_fields\u201d flag specifies whether field(s) required in the definition table but missing in the main dataset are allowed and ignored. If this flag is set to \u201cFALSE\u201d, the harmonization step will halt in case of any missing field.b.If the participant withdrawal list is provided, records of these individuals will be removed.Note: The function harmonize_ukb_data harmonizes all available data . Additionally, the function will check if all fields required in the definition table are present in the main dataset and inform the user if any field is missing.# Argument names below are indicative; consult the ukbpheno manual for the exact signaturelst.harmonized.data <- harmonize_ukb_data(fukbtab = fukbtab, fhtml = fhtml, dfDefinitions = dfDefinitions_processed_expanded, dfData.settings = dfData.settings, fhesin = fhesin, fhesin_diag = fhesin_diag, fhesin_oper = fhesin_oper, fgp_clinical = fgp_clinical, fgp_scripts = fgp_scripts, fdeath_portal = fdeath_portal, fdeath_cause_portal = fdeath_cause_portal, f_withdrawal = f_withdrawal, allow_missing_fields = TRUE)Note: Time required to harmonize the data is dependent on the size of the files. Factors that should be taken into consideration include the number of fields approved for the particular project and the number of participants included, as well as whether record-level primary care data are present.
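Before inspecting individual phenotypes, a quick sanity check of the harmonization output can be useful. A minimal sketch, assuming (as the next step shows) that lst.harmonized.data$lst.data is a named list of event tables with an "eventdate" column:
# Number of event records contributed by each data source
sapply(lst.harmonized.data$lst.data, nrow)
# Range of event dates in the first source, ignoring missing dates
range(lst.harmonized.data$lst.data[[1]]$eventdate, na.rm = TRUE)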
11.Inspect the harmonized data.View(lst.harmonized.data)a.i.View(lst.harmonized.data$lst.data)Diagnosis codes without an associated actual event date will have the date of the visit to the assessment center (such as self-report diabetes) in the \u201ceventdate\u201d column and \u201c0\u201d in the \u201cevent\u201d column, indicating that the date does not reflect a true event .12.Gather the definition, the harmonized data tables, the data settings and the individuals to be included (either specified by a vector of participant identifiers or a data-frame containing the identifier in the first column and reference dates in the second column).# 1) definition of the target trait \u201cType 2 diabetes\u201dtrait<-\"DmRxT2\"# 2) harmonized data table - lst.harmonized.data# 3) data setting data-frame - dfData.settings# 4) individuals specified via df_reference_date# Here the dates of baseline visit (f.53.0.0) are taken as reference (column selection reconstructed)df_reference_dt_v0 <- lst.harmonized.data$dfukb[, c(\"identifier\", \"f.53.0.0\")]13.Use the \u201cget_cases_controls\u201d function to obtain the case/control status. The function returns a list of three data.table objects: \u201cdf.casecontrol\u201d, \u201call_event_dt.Include_in_cases\u201d and \u201call_event_dt.Include_in_cases.summary\u201d.lst.DmRxT2.case_control <- get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==trait), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)View(lst.DmRxT2.case_control)a.\u201cdf.casecontrol\u201d is a data.table object of 16 columns providing a summary of the diagnosis per participant .b.\u201call_event_dt.Include_in_cases\u201d is a data.table object including all event episodes supporting the diagnosis for the cases included .c.\u201call_event_dt.Include_in_cases.summary\u201d is a data.table object with the same format as \u201cdf.casecontrol\u201d but includes only cases (both included and excluded cases).14.Generate a timeline plot to check the relative contribution by various data sources over time .DmRxT2_timeline<-plot_disease_timeline_by_source(definition=dfDefinitions_processed_expanded%>%filter(TRAIT==trait),lst.harmonized.data$lst.data,dfData.settings, df_reference_dt_v0$identifiers)DmRxT2_timeline15.Use \u201cmake_upsetplot\u201d to examine the overlaps between the data sources at baseline to gain insight on their relationships .upset_plot<-make_upsetplot(definition=dfDefinitions_processed_expanded%>%filter(TRAIT==trait),lst.harmonized.data$lst.data,dfData.settings,df.reference.dates\u00a0= df_reference_dt_v0)upset_plot16.Generate summary descriptions on the events with \u201cget_stats_for_events\u201d.
For example, generation of a frequency plot of codes among all events from secondary care may help verify or refine the definition .# Extract all hospital admission recordsall_DmRxT2_evnt<-lst.DmRxT2.case_control$all_event_dt.Include_in_cases# Keep records from the hospital inpatient (HESIN) tables; the column name and pattern are illustrativeDmRxT2_hesin_rec<-all_DmRxT2_evnt[grepl(\"hesin\", source)]# Get some descriptive statistics on the records at the code levelhesin_stats<-get_stats_for_events(DmRxT2_hesin_rec)hesin_stats$stats.codes.summary.p17.Explore the secondary care code count by individual .# Get some summary statistics on the records at the individual level (aggregation reconstructed)DmRxT2_rec_cnt<-DmRxT2_hesin_rec[, .(count = .N), by = identifier]max(DmRxT2_rec_cnt$count)median(DmRxT2_rec_cnt$count)mean(DmRxT2_rec_cnt$count)quantile(DmRxT2_rec_cnt$count)# Visualize count with a barplot with a zoom-in on counts between 0-50ggplot2::ggplot(DmRxT2_rec_cnt, ggplot2::aes(x = count))\u00a0+\u00a0ggplot2::geom_bar(fill=\"#0073C2FF\")\u00a0+ ggplot2::xlab(\"Number of secondary care records per person\")\u00a0+\u00a0ggplot2::ylab(\"Frequency\")\u00a0+ #theme with white background\u00a0ggplot2::theme_bw()\u00a0+ ggplot2::theme(text\u00a0= ggplot2::element_text(size=22),panel.grid.minor\u00a0=ggplot2::element_blank(),panel.grid.major\u00a0=ggplot2::element_blank())\u00a0+ ggforce::facet_zoom(xlim = c(0, 50))18.Generate a timeline of the codes contributing to the diagnosis for a particular individual (please replace the identifier if copied from the cell below) .# Plot individual time line; supply the definition, harmonized data, data settings and the participant identifier of interestplot_individual_timeline19.First identify participants with specific diabetes codes as well as the general diabetes code.# Identify individuals with specific DmT2 codeslst.DmT2.case_control<-get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==\"DmT2\"), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)# Identify individuals with specific DmT1 codeslst.DmT1.case_control<-get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==\"DmT1\"), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)# Identify individuals with DmGlst.DmG.case_control <- get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==\"DmG\"), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)# Identify individuals with general diabetes diagnosis codes excl. medicationlst.Dm.case_control <- get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==\"Dm\"), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)20.Identify use of different anti-diabetic medications. Find individuals on metformin likely due to\u00a0diseases other than diabetes by cross-checking with the list of individuals with diabetes diagnoses.# Identify individuals with metformin uselst.RxMet.case_control <- get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==\"RxMet\"), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)#Identify use of insulin/oral diabetic med. excl. metformin
lst.RxDmNoMet.case_control <- get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==\"RxDmNoMet\"), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)#Identify individuals that are on metformin but have no diabetes codes#nor medication other than metformin (set operation reconstructed)RxMet_DmUnlikely<-setdiff(lst.RxMet.case_control$df.casecontrol[Hx==2]$identifier, union(lst.Dm.case_control$df.casecontrol[Hx==2]$identifier, lst.RxDmNoMet.case_control$df.casecontrol[Hx==2]$identifier))21.Cross-examine various diagnoses. For example, we want to identify individuals with young-onset diabetes who did not have records supporting a diagnosis of non-type 2 diabetes; namely, these individuals did not have evidence of type 1 diabetes nor gestational diabetes.a.We identify these individuals via set operations on the relevant diagnoses.b.Inspect the records of these individuals for evidence of type 2 diabetes.# Identify individuals with self-report insulin <12\u00a0months post-diagnosislst.RxDmInsFirstYear.case_control<-get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==\"RxDmInsFirstYear\"), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)# Identify young onset self reported diabetes (European origin)lst.SrDmYEw.case_control <- get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==\"SrDmYEw\"), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)# Identify young onset self reported diabetes (Caribbean African origin)lst.SrDmYSaCa.case_control <- get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==\"SrDmYSaCa\"), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)# Individuals with young onset diabetes (union reconstructed)ind_young_onset<- union(lst.SrDmYEw.case_control$df.casecontrol[Hx==2]$identifier, lst.SrDmYSaCa.case_control$df.casecontrol[Hx==2]$identifier)# Individuals with evidence of other types of diabetes reported (union reconstructed)ind_RxInsFirstYear_DmT1_DmG<- union(union(lst.RxDmInsFirstYear.case_control$df.casecontrol[Hx==2]$identifier, lst.DmT1.case_control$df.casecontrol[Hx==2]$identifier),lst.DmG.case_control$df.casecontrol[Hx==2]$identifier)# Young onset but no DM type 1/ gestational diabetes specific codes nor self report of insulin within the first year of diagnosisinds_young_onset_possible_DmT2\u00a0<-setdiff(ind_young_onset, ind_RxInsFirstYear_DmT1_DmG)# Check the records of these individualslst.DmRxT2.case_control$all_event_dt.Include_in_cases[identifier %in% inds_young_onset_possible_DmT2]To make the definition of type 2 diabetes more precise, we may screen and exclude individuals with evidence of other types of diabetes as well as use of metformin not due to diabetes.Timing: 15\u201330\u00a0min22.Read and process the definition file.# Read the definitions table (the package ships an example table with the cardiometabolic traits; the file name below is indicative)fdefinitions <- paste0(extdata_dir, \"definitions_cardiometabolic.tsv\")dfDefinitions_processed_expanded<-read_defnition_table(fdefinitions)23.Extract only the required fields from the main dataset using read_ukb_tabdata.a.The metadata provides information such as the data type of these fields.b.Extract age at assessment center visit (Field 21003), sex (Field 31), body mass index (Field 21001), glycated hemoglobin level (Field 30750), glucose level (Field 30740), self-report insulin use within the first year of diabetes diagnosis (Field 2986), UK Biobank assessment center visited (Field 54) and date of attending assessment center (Field 53).# Extract clinical variables from the main dataset using read_ukb_tabdata# We need the metadata (.html) file for read_ukb_tabdatadfhtml <- read_ukb_metadata(fhtml)# Rename the identifier field in the metadata (indexing reconstructed)dfhtml$field.tab[1]<-\"identifier\"# Age at assessment center visit, sex, BMI, HbA1c, glucose, insulin within 1 year of diagnosis, UK Biobank assessment center location, date of visitbaseline_fields<-c(21003, 31, 21001, 30750, 30740, 2986, 54, 53)# Extract these variables from the main dataset (argument names indicative)dfukb_baseline <- read_ukb_tabdata(fukbtab, dfhtml, fields_to_keep = baseline_fields)gc()
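Before generating the phenotypes in batch, a quick look at the extracted fields can catch extraction problems early. A minimal sketch, assuming the f.<field>.<instance>.<array> column naming used elsewhere in this protocol (e.g., f.53.0.0) and that HbA1c is reported in mmol/mol:
# Distribution of glycated hemoglobin (Field 30750) at the baseline visit
summary(dfukb_baseline$f.30750.0.0)
hist(dfukb_baseline$f.30750.0.0, main = "HbA1c at baseline", xlab = "HbA1c (mmol/mol)")
# Cross-tabulate sex (Field 31) to verify the coding
table(dfukb_baseline$f.31.0.0, useNA = "ifany")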
24.Generate the phenotypes for atrial fibrillation, coronary artery disease, type 2 diabetes, hypertrophic cardiomyopathy, heart failure, hypertension and hyperlipidemia with a loop and merge the phenotype information into one table \u201cdfukb_baseline_pheno\u201d.# The target disease traits we will generate in batch; extend with the trait identifiers from your definition tablediseases<-c(\"DmT2\")# Make an output folder to store the results (path indicative)out_folder<-paste0(pheno_dir, \"results/\")if(!dir.exists(file.path(out_folder))){dir.create(file.path(out_folder))}df_withdrawal<-fread(f_withdrawal)# remove withdrawn participantsdfukb_baseline_pheno<-dfukb_baseline[! identifier %in% df_withdrawal$V1]# Loop through the traits, including family history of related diseases and the diabetes medication usefor (disease in diseases){print(disease)lst.case_control <- get_cases_controls(definitions=dfDefinitions_processed_expanded %>% filter(TRAIT==disease), lst.harmonized.data$lst.data,dfData.settings, df_reference_date=df_reference_dt_v0)\u00a0# Add the trait to the column namescolnames(lst.case_control$df.casecontrol) <- paste(disease, colnames(lst.case_control$df.casecontrol), sep\u00a0= \"_\")\u00a0# Except for the participant identifiernames(lst.case_control$df.casecontrol)[names(lst.case_control$df.casecontrol)\u00a0== paste(disease, \"identifier\", sep\u00a0= \"_\")]<-\"identifier\"# Merge these columns with dfukb_baseline_phenodfukb_baseline_pheno<-merge(dfukb_baseline_pheno, lst.case_control$df.casecontrol, by=\"identifier\", all.x=TRUE)}This session demonstrates how to generate multiple phenotypes and make a clinical characteristics table with these phenotypes, stratified by type 2 diabetes status. An example definition table with the selected cardiometabolic diseases, family history of these diseases and diabetes medication usage is provided in the package. We additionally extract demographic information, namely age and sex, as well as the biomarkers BMI, blood glucose, glycated hemoglobin and self-report insulin use within one year of diabetes diagnosis from the main dataset.Timing: 10\u00a0min25.Select variables to be reported in the clinical characteristics table. Rename the variables in the table to improve readability. Create the clinical characteristics table stratified by type 2 diabetes. Write the clinical characteristics table to a file.# Keep only the variables needed for the tabledfukb_baseline_pheno_fortable1<-dfukb_baseline_pheno# Negative first diagnosis day indicates history while positive indicates follow-up casesdfukb_baseline_pheno_fortable1$DmT2_0_first_diagnosis_years<-(-1*dfukb_baseline_pheno_fortable1$DmT2_0_first_diagnosis_days)/365.25# Rename for readability; the full vector of human-readable names is shortened here, e.g.:# colnames(dfukb_baseline_pheno_fortable1)<-c(\"identifier\", \"Age\", \"Sex\", \"BMI\", \"Years since type 2 diabetes diagnosis\", \"Type 2 diabetes\", \u2026)# Below the parameters for CreateTableOne# The full variable listvars<-colnames(dfukb_baseline_pheno_fortable1)# The categorical variables on the clinical characteristics table (continuous-variable names are illustrative)factorVars<-setdiff(vars, c(\"Age\", \"BMI\", \"Years since type 2 diabetes diagnosis\"))# Create the clinical characteristics table stratified by type 2 diabetes (stratum column name follows the renaming above)tableOne <- CreateTableOne(vars = vars, strata = \"Type 2 diabetes\", data = dfukb_baseline_pheno_fortable1, factorVars = factorVars)hist(dfukb_baseline_pheno_fortable1$`Years since type 2 diabetes diagnosis`)tableOnetab1Mat <- print(tableOne, quote = FALSE, noSpaces = TRUE, printToggle = FALSE)# Save the table to a CSV file (file name indicative)write.csv(tab1Mat, file = paste0(out_folder, \"tableOne_DmT2.csv\"))In the following example analyses, we investigate the characteristics of participants with type 2 diabetes specific codes. We exclude the cases with type 1 diabetes diagnosis codes and we exclude any controls with non-specific diabetes codes .
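Following the sign convention noted above (negative first-diagnosis days indicate history at baseline; positive values indicate follow-up events), here is a minimal sketch deriving explicit prevalent/incident indicators; the package's Hx columns already encode similar information, so this is purely illustrative:
# Prevalent: first diagnosis on or before the baseline visit (non-positive days)
dfukb_baseline_pheno[, DmT2_prevalent := !is.na(DmT2_0_first_diagnosis_days) & DmT2_0_first_diagnosis_days <= 0]
# Incident: first diagnosis after the baseline visit (positive days)
dfukb_baseline_pheno[, DmT2_incident := !is.na(DmT2_0_first_diagnosis_days) & DmT2_0_first_diagnosis_days > 0]
table(dfukb_baseline_pheno$DmT2_prevalent, dfukb_baseline_pheno$DmT2_incident)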
Timing: 5\u00a0min26.With the time-to-event data as well as the censoring dates for the different data sources in different regions, compute the observed time for each participant.a.The start time is the date when the participant visited the assessment center;b.observed time is up to the date of event or the earliest of the date of death and the censoring date of hospital inpatient records (last follow-up).# Get death dates from the harmonized data (table and column selection reconstructed)deathdt<-unique(lst.harmonized.data$lst.data$death[, c(\"identifier\", \"eventdate\")])# Rename the column and mergecolnames(deathdt)<-c(\"identifier\", \"death_date\")dfukb_baseline_pheno<-merge(dfukb_baseline_pheno, deathdt, by=\"identifier\", all.x=TRUE)# HESIN censoring dates are different by region# Use the UK Biobank assessment center location attended by the participants# (fill in the Field 54 assessment centre codes for each country; the code lists were truncated here)england<-c()scotland<-c()wales<-c()# Corresponding censoring dates (assignments reconstructed; column names indicative)dfukb_baseline_pheno[f.54.0.0 %in% england, censor_date := as.Date(\"2021-03-31\")]dfukb_baseline_pheno[f.54.0.0 %in% scotland, censor_date := as.Date(\"2021-03-31\")]dfukb_baseline_pheno[f.54.0.0 %in% wales, censor_date := as.Date(\"2018-02-28\")]# Time-to-event/observed time is determined at the earliest of the date of event, date of death and censoring date of the HESIN data (last follow-up):# - for those who have events it is already calculated;# - non-event participants who died before the HESIN censoring date are censored at death;# - non-event participants who died after the censoring date, and those alive at the censoring date, are censored at last follow-up.27.Create the survival object and Kaplan-Meier plot for new-onset heart failure stratified by type 2 diabetes status at baseline.#Estimate risk of new onset heart failure by presence/absence of type 2 diabetes at baseline# (the time and status column names below are indicative)fit<-survival::survfit(survival::Surv(time_to_hf, hf_event) ~ DmT2_0_Hx, data\u00a0= dfukb_baseline_pheno[DmT2_0_Hx>0])# summary(fit)# Make Kaplan-Meier plot (palette and legend labels indicative)ggsurvplot(fit, data\u00a0= dfukb_baseline_pheno[DmT2_0_Hx>0],\u00a0censor.size=2,\u00a0palette\u00a0= c(\"#0073C2FF\", \"#EFC000FF\"),\u00a0conf.int\u00a0= TRUE, # Add confidence interval\u00a0pval\u00a0= TRUE, # Add p-value\u00a0risk.table\u00a0= TRUE, # Add risk table\u00a0risk.table.col\u00a0= \"strata\", # Risk table color by groups\u00a0legend.labs\u00a0= c(\"No type 2 diabetes\", \"Type 2 diabetes\"),\u00a0risk.table.height\u00a0= 0.2)Timing: 5\u00a0min28.Match type 2 diabetes cases to controls by age, sex and body mass index. We extract those variables and remove individuals with missing values in either the target phenotype or any covariates.######################################### 1:2 case control matching with MatchIt#########################################library(\"MatchIt\")# Remove individuals with either missing or excluded phenotype for the target phenotype (type 2 diabetes at baseline)df_to_matchit<-dfukb_baseline_pheno[!is.na(DmT2_0_Hx) & DmT2_0_Hx>0]# Pick three covariates: age at assessment center visit, sex and BMI (field names per the baseline extraction; selection reconstructed)df_to_matchit<-na.omit(df_to_matchit[, c(\"identifier\", \"DmT2_0_Hx\", \"f.21003.0.0\", \"f.31.0.0\", \"f.21001.0.0\"), with=FALSE])
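Before matching, it can be useful to check the raw case/control balance (at this point the phenotype is still coded 1 = control, 2 = case; step 29 recodes it to 0/1):
# Raw counts of controls and cases prior to matching
table(df_to_matchit$DmT2_0_Hx)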
After the harmonization step, the user should have in the R workspace the data from the Data Portal (record-level data) as well as a subset of the main dataset relevant for the target phenotypes (as specified in the definition table). Users can perform further analyses in R as they see fit.

Using our machine with 64 GB RAM (DDR4 2667 MHz) and a 10-core processor (Intel Skylake) under an Ubuntu 16 system, it took 12 min to load and process the main dataset and all record-level files. The most time-consuming step was loading the main dataset (six minutes for a 35.5 GB file), followed by the primary care data. The harmonization step was additionally tested on an Ubuntu 16 machine with 16 GB RAM (DDR4 2667 MHz) and a 6-core processor (AMD Ryzen 5). It took approximately 45 min to complete on this machine.

The "get_cases_controls" function from ukbpheno determines the case/control status for a target phenotype following the inclusion/exclusion criteria outlined in the definition table. Users can extract prevalent as well as incident cases with a specific reference time point. The results presented in this protocol were produced using data downloaded in June 2021, with primary care data available for around 45% of the UK Biobank cohort; the data censoring dates are shown in the accompanying table.

Following the example definition of "DmRxT2", users should expect a prevalence of 4.7% at the baseline visit, an estimate close to previously reported figures. Examining the diabetes diagnoses captured in the secondary care system, users should be able to obtain frequency plots similar to those shown in the accompanying figures. With the plot_individual_timeline function, users should expect a visualization of the diagnosis codes of a selected participant over time. Lastly, in the cross-examinations with other types of diabetes and medication, users should expect numbers similar to those reported in the prevalence algorithms by Eastwood and colleagues.

In this part users should be able to generate the cardiometabolic phenotypes, namely atrial fibrillation or flutter, coronary artery disease, hypertrophic cardiomyopathy, heart failure, type 2 diabetes, hypertension and hyperlipidemia. Users should also obtain the related phenotypes, including family history of diabetes, family history of heart disease, family history of hypertension and the use of diabetes medications. Using our machine with 64 GB RAM (DDR4 2667 MHz) and a 10-core processor (Skylake), it took three minutes to process all 12 phenotypes.

In this part users should obtain a clinical characteristics table stratified by type 2 diabetes status, with group differences at p<0.0001 in the data analyzed. In the survival analysis on new-onset heart failure (the outcome of interest) between participants with or without type 2 diabetes at baseline, users would obtain a Kaplan-Meier plot similar to the one shown in the expected outcomes.

Following up with the type 2 diabetes phenotype we created, it can be seen that the case-control ratio is unbalanced. It can also be seen that the controls were younger, more likely to be female and had lower body mass indices compared to the cases.
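For reference, the single-trait call pattern behind these expected outcomes, following the get_cases_controls usage shown earlier (object names as in this protocol; the trait code "DmT2" is one of the package's example definitions):

library(ukbpheno)
library(dplyr)
# Ascertain cases/controls for one trait at the baseline visit
lst.case_control <- get_cases_controls(
  definitions = dfDefinitions_processed_expanded %>% filter(TRAIT == "DmT2"),
  lst.harmonized.data$lst.data,
  dfData.settings,
  df_reference_date = df_reference_dt_v0)
# Inspect the resulting case/control table
head(lst.case_control$df.casecontrol)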
Phenotyping of a heterogeneous resource such as UK Biobank can be challenging due to multiple data origins compounded with varying coverage, both in terms of time and individuals. In the current protocol we demonstrated how to use the ukbpheno package to define health-related outcomes, and we presented possible analyses with regard to the phenotyping process.

With the ukbpheno package, the case/control status is determined by the presence of diagnosis records. In the case of contradictory records, an individual might be classified with theoretically mutually exclusive diagnoses. Ad hoc analyses would be needed to refine the phenotypes of these individuals should such precision be required.

It is also of note that, while users can assign different weights to different data sources by adjusting the minimum instance filter in the settings, it is not possible to directly weight individual codes, which could be important for a certain phenotype. This limitation can be circumvented by an implementation at the definition level (separating the codes into various definitions). It is important to recognize that there is no gold standard for many of the health-related outcomes. Users should decide on their definitions based on the study questions at hand.

Unable to download/decrypt the main dataset (step 1).
Ensure the project key is up to date; it is valid for one year after initiation.

Unable to download data from the Data Portal (step 2).
Request the relevant fields listed in step 2 via the UK Biobank Access Management System.

Error: Failed to install 'ukbpheno' from GitHub: installation of package "xx" had non-zero exit status (step 5).
Installation of the dependent package "xx" was not successful. Find the error message on the console, resolve the error and restart the installation. A possible reason is the error "dependency "xx" is not available (for R version x.x.x)", which may be solved by specifying a version of the package compatible with the corresponding R version.

Certain codes are missing after reading in the definition table (step 9).
The package drops codes that are not available in the data. If a certain code is dropped not for this reason, check for special characters (accented characters are not recognized) and make sure the codes are comma separated.

The data processing step fails (step 10).
Run "get_all_varnames" with the processed definition table and the meta-data file to check whether some required fields are missing from the main dataset. Either remove the missing fields from the definition table or allow missing fields in the "harmonized_ukb_data".

Further information and requests for resources should be directed to the lead contact, Ming Wai Yeung (m.w.yeung@umcg.nl). This study did not generate new unique reagents."}
+{"text": "We announce the coding-complete genome sequences of 23 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) Omicron strains obtained from Bangladeshi individuals. The Oxford Nanopore Technologies sequencing platform was utilized to generate the genomic data, deploying ARTIC Network-based amplicon sequencing. A novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2; family Coronaviridae, genus Betacoronavirus) variant, known as the Omicron variant (B.1.1.529), was initially reported to the World Health Organization (WHO) on 24 November 2021. To summarize, 3,723,478 reads were generated. A nearly complete genome was obtained for each sample, and the genomic information is provided in the accompanying table. Lineages were assigned with Pangolin (https://github.com/cov-lineages/pangolin), followed by visual inspection using CLC Genomics Workbench v21.0 (Qiagen).
In comparison with the reference genome (Wuhan Hu-1 [GenBank accession number NC_045512.2]), the spike protein-based signature amino acid variations assigned 22 of the sequences to sublineage BA.5 and one to BA.2.75. Genome sequencing has been playing a critical role in COVID-19 responses, because new variants are constantly evolving. Therefore, rapid sequencing and data sharing can contribute to the implementation of quick and informed public health decisions to mitigate the consequences of COVID-19.

As part of a nationwide coronavirus disease 2019 (COVID-19) surveillance program, nasopharyngeal swab specimens were obtained from routine diagnostic samples from patients across Bangladesh. Reverse transcription-PCR (RT-PCR) was performed using a commercially available novel coronavirus (2019-nCoV) nucleic acid diagnostic kit. A total of 48 SARS-CoV-2 RT-PCR-positive samples were subjected to sequencing. Viral RNA was extracted from nasopharyngeal swab samples using the QIAamp viral RNA minikit (Qiagen). ARTIC v3 primer-based multiplex PCR amplicons were generated and used as the sequencing libraries.

The data from this study can be found under GISAID accession numbers EPI_ISL_13439470, EPI_ISL_13439471, EPI_ISL_13439472, EPI_ISL_13439473, EPI_ISL_13439474, EPI_ISL_13439475, EPI_ISL_13439476, EPI_ISL_13439477, EPI_ISL_13439478, EPI_ISL_13439479, EPI_ISL_13439480, EPI_ISL_13439481, EPI_ISL_13574266, EPI_ISL_13574267, EPI_ISL_13574268, EPI_ISL_13574269, EPI_ISL_13439482, EPI_ISL_13574270, EPI_ISL_13574271, EPI_ISL_13439484, EPI_ISL_13439485, EPI_ISL_13439486, and EPI_ISL_14859273. The GenBank accession numbers are listed in the accompanying table."}
+{"text": "Patients with dysphagia have impairments in many aspects, and an interdisciplinary approach is fundamental to define diagnosis and treatment. A joint approach in the clinical and videoendoscopic evaluation is paramount.

Aim: to study the correlation between the clinical assessment (ACD) and the videoendoscopic (VED) assessment of swallowing by classifying the degree of severity and the qualitative/descriptive analyses of the procedures.

Study design: cross-sectional, descriptive and comparative.

Materials and methods: held from March to December of 2006, at the Otolaryngology/Dysphagia ward of a hospital in the countryside of São Paulo. 30 dysphagic patients with different disorders were assessed by ACD and VED. The data were classified by means of severity scales and qualitative/descriptive analysis.

Results: the correlation between the ACD and VED severity scales pointed to a statistically significant low agreement (Kappa = 0.4). The correlation between the qualitative/descriptive analyses pointed to an excellent and statistically significant agreement (Kappa = 0.962) (p<0.001) concerning the entire sample.

Conclusions: the low agreement between the severity scales points to a need to perform both procedures, reinforcing VED as a doable procedure. The descriptive qualitative analysis pointed to an excellent agreement, and such data reinforce our need to understand swallowing as a process.

Multidisciplinary work in dysphagia is a common denominator advocated by researchers and clinicians, since dysphagic patients have losses in the medical, nutritional, physiotherapeutic, physiological and speech arenas, thus needing numerous professionals to serve all of their health care demands. The best-known methods used to assess swallowing are the clinical evaluation of swallowing (CES) and the instrumental tests of video-fluoroendoscopy (VFE) and swallowing video-endoscopy (SVE).
CES cannot provide a definitive diagnosis of dysphagia; however, it is a component which allows us to understand its nature.

Among the instrumental tests, VFE has been considered the "gold standard"; however, because of its high cost and the scarcity of places where it can be performed, SVE has proven accessible and doable. Swallowing videoendoscopy is a simple test, of low cost and little invasiveness, besides being easily transportable, making it possible to do sequential evaluations in patients with mobility challenges. It allows one to observe the pharyngeal phase of swallowing, and it allows the physician to ask the patient to perform airway protection maneuvers, so as to help guide the patient regarding a proper diet.

Considering the importance of SVE in the diagnosis of dysphagia, otorhinolaryngologists gain relevance in the work team, being the professionals responsible for performing the exam. The ENT is in charge of interpreting the SVE in its functional and anatomical aspects, and such data are fundamental for diagnosis. The speech therapist can work together with the ENT during the exam, suggesting the assessment of therapeutic strategies.

Considering the different functions of each swallowing assessment procedure, it is necessary to understand and interpret the different signs and symptoms observed in order to pinpoint the contribution of each evaluation procedure and thus establish the approach in cases of dysphagia.

To study the correlation between the clinical evaluation (CES) and swallowing video-endoscopy (SVE) by classifying the degree of severity and the qualitative/descriptive analysis of the two evaluation procedures.

The present cross-sectional, descriptive and comparative study was approved by the Ethics Committee of the institution under protocol # 796/2005. All the subjects signed the informed consent form.

The evaluations were carried out in the dysphagia/ENT ward of a hospital in the countryside of the state of São Paulo between March and December of 2006. This ward is geared especially to patients with neurogenic dysphagia, diagnosed with Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS) and Machado-Joseph disease (MJD).

All the patients were submitted to CES and SVE. After the procedures, the results were discussed with the team, made up of an ENT physician, ENT residents, speech therapists and nutritionists.

CES and SVE followed the procedures proposed by the present study (Attachment 1), which was built from the protocols of other authors.

The CES was done in a direct and an indirect manner. The indirect one includes an interview, structural and sensory evaluation of the oral cavity, and the administration of food. The neck was auscultated at rest, during saliva swallowing, and before, during and after the swallowing of food. Later on, compensatory maneuvers were studied in order to achieve safe swallowing.

SVE followed the procedures proposed (Attachment 1) and was carried out by an otorhinolaryngologist using conventional video-endoscopic equipment. The speech therapist participated, offering the different food quantities and consistencies to be studied, and also suggesting the evaluation of certain airway protection maneuvers.
The exam was recorded on DVD.

The data were analyzed in the following fashion:
- Step 1 - Analysis of the CES and SVE by means of classification according to severity scales. The severity scale employed for the CES was the one proposed by Furkim and Silva.
- Step 2 - Comparison of the degrees of severity between the CES and SVE scales.
- Step 3 - Case-by-case classification according to the qualitative/descriptive analysis, based on the signs and symptoms observed during the CES and SVE.

The qualitative analysis of the signs observed in the CES and SVE was carried out because the scales did not provide sufficient elements for therapeutic planning. The analysis was based on the comparison of the CES and SVE findings according to the criteria presented in the corresponding table.

The signs listed above make up the assessment protocol presented in the present study, which is descriptive, providing a broad view of the swallowing process and supporting the creation of specific treatment plans, since it is possible to identify the alteration shown. Each case had its two assessments (CES and SVE) compared according to the qualitative analysis criteria, and was classified accordingly.

- Step 4 - Comparison of the severity degree classification and the qualitative analysis.

In order to understand the contribution of each procedure in the evaluation of swallowing, we chose to analyze the agreement between the qualitative analysis and the degree of severity, by correlating the scales with the data identified in the procedures. After the correlation, the data were grouped in the following way:
1 - Group in which the CES and SVE severity scales indicated the same degree and the qualitative analysis indicated similar CES and SVE signs;
2 - Group in which the severity scales indicated a greater degree by SVE, correlated with cases in which the qualitative SVE analysis indicated more signs;
3 - Cases in which the CES indicated a greater severity on the scale and more data in the qualitative analysis.

In order to describe the sample profile according to the study variables, we created frequency tables of the categorical variables (disease) with absolute (n) and percentage (%) values, and descriptive statistics with position and scatter values of the continuous variable (age).

The statistical analysis used "The SAS System for Windows", version 8.02. The agreement analysis among the classifications used the kappa agreement coefficient. Kappa values above 0.75 indicate excellent agreement, values between 0.40 and 0.75 indicate intermediate agreement, and values below 0.40 indicate low agreement among the classifications. The level of significance adopted for the statistical tests was 5% (p<0.05).

We studied 30 adult dysphagic patients, 19 men and 11 women. Their mean age was 56 years, varying between 19 and 91 years. They presented different base diagnoses: Parkinson's disease, amyotrophic lateral sclerosis, Machado-Joseph disease, stroke, and four patients with other diagnoses. The correlation among the results from the severity scale classification in each evaluation procedure (step 2) is depicted in the corresponding table: Kappa = 0.400 (Z = 2.74; p = 0.006).

It is possible to see in Graph 1 that the number of agreeing evaluations according to the criteria of the severity scales (same degree of severity in CES and SVE) is small. The statistical analysis indicated an intermediate/low agreement (Kappa = 0.4), in a statistically significant way (p = 0.006).
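For reference, the kappa coefficient used in steps 2 and 4 compares the observed agreement with the agreement expected by chance. With $p_o$ the observed proportion of agreement between CES and SVE, and $p_e$ the proportion of agreement expected by chance:

\kappa = \frac{p_o - p_e}{1 - p_e}

A kappa of 0.400 therefore means that the two severity scales agree only 40% of the way between chance-level and perfect agreement, which is why it is read as intermediate/low.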
Kappa values below 0.4 indicate low agreement.

The results from steps 3 and 4 - the classification according to the descriptive qualitative analysis and the comparison of these results with the severity degree classification - indicated agreement in most of the cases studied, as shown in the corresponding table. The agreement between the degree of severity and the qualitative analysis was excellent (Kappa = 0.962), in a statistically significant way (p<0.001). These data reinforce that the qualitative/descriptive analysis proved to be an efficient evaluation method.

The subjects in the sample had a mean age above 50 years, which can justify the fact that they frequently present neurodegenerative alterations. There is a prevalence of patients with Parkinson's disease and ALS, followed by MJD and stroke, as these are the main populations in the ward.

According to Table 1, the agreement observed between the CES and SVE severity scales was intermediate/low, in a statistically significant way. Many studies, such as that of Mathers-Schmidt and Kurlinski, correlate data found in the CES with findings from objective exams. The correlation of the signs observed between CES and SVE from the qualitative viewpoint is justified by the influence of the oral phase on the pharyngeal phase of swallowing.

Nasal food reflux, whose cause is inefficiency or failure of the soft palate muscles with a reduction of intraoral pressure, can be seen at both CES and SVE. There are correlations between aspiration risk and changes in soft palate movement. Motor or sensory involvement of the facial muscles alters the neuronal information that is the basis for a proper motor response to the food to be ingested. Such involvement may cause an increase in oral transit time, posterior deflection and/or inefficient ejection, resulting in alterations of the pharyngeal phase, especially stasis in the valleculae.

Multiple swallows can be seen during both CES and SVE; they happen because of stasis in the oral cavity and vallecula - an inadequate oral phase which causes altered food propulsion. Stasis in the pyriform sinuses and on the posterior pharyngeal walls is also considered an alteration of the pharyngeal phase of swallowing, given that the impaired mechanism is pharyngeal wall mobility.

Throat clearing, cough and alterations in neck auscultation are signs of penetration and/or aspiration seen during the CES, and they can be confirmed by SVE - by the presence of food in the larynx without passing through the vocal folds (penetration) or by the presence of food below the vocal folds (aspiration).

International studies comparing the use of neck auscultation, CES and objective exams to predict aspiration, such as that of Tohara et al., indicate that the association of two procedures is safer for finding aspiration, since auscultation alone does not guarantee sensitivity in all altered cases.

As we observe the data individually, we stress the 13 disagreeing cases.
In 5 of these cases the CES indicated more findings, while in the other 8 the SVE showed more data. We can argue that, among the 8 cases in which SVE showed more data, in two patients we observed stasis that had not been suggested by the clinical evaluation, besides aspiration in two and laryngotracheal penetration in four cases.

It is important to note that the clinical evaluation is not efficient at identifying silent penetration and aspiration, besides being of limited efficiency in detecting stasis in hard-to-visualize places, which can cause late aspiration and inefficient treatment.

Of the 30 patients investigated, in the 26 who were clinically and videofluoroendoscopically evaluated we noticed that it was not safe to forecast the presence of penetration/aspiration of liquids by the clinical evaluation alone. Because of the need for complementary tests, the present study also presents the evaluation roadmap used, in order to suggest assessment procedures which may help to better understand the swallowing process and provide complementary information for the clinical-therapeutic rationale for dysphagic patients.

Many studies stress the importance of associating the CES and the objective examination in the assessment of swallowing. Such studies suggest that the two procedures are complementary and essential for the diagnosis and treatment planning of dysphagia, leading to the definition of more specific approaches for each patient.

Swallowing evaluation procedures must try to understand the swallowing process as a whole, in other words, taking into account the patient's mental/cognitive status and behavior.

SVE is a highly efficient procedure: it does not require high investment, because the equipment utilized is the one ENTs are already used to having, and it is also efficient in terms of time, because the entire test can be performed well under 20 minutes. The SVE also broadens the action scope of otorhinolaryngologists and allows for interdisciplinary work with speech therapists.

1. The severity classification agreement of CES and SVE proved to be intermediate/low, reinforcing the need to perform both assessment procedures.
2. The agreement between the correlation of severity degree and the qualitative/descriptive analysis proved to be excellent, reinforcing the qualitative/descriptive analysis as an efficient assessment method.

On-going PhD thesis at the Institute of Language Studies in the field of Neurolinguistics, entitled: "Study and speech therapy follow-up of post-stroke subjects".
This study is associated with the Neurolinguistics Integrated Project (CNPq: 521773/95-4).

ATTACHMENT 1
Evaluation Roadmap
Swallowing Clinical and Video-endoscopic (SVE) Assessment Protocol (May/07)

Exam date: ____ / ____ / ____
Patient: ____ Reg. #: ____ BD: ____ Age: ____
Address: ____ Tel: ____ Informant: ____
D.H.: ____
P.H.: ____
Disease duration: ____
Medications: ____
Current complaint: ____
Swallowing complaint: Yes ____ No ____
Complaint duration: ____
Prior feeding habits: ____
Meal records (24 hrs): ____
Usual weight: ____ Current weight: ____ Height: ____ BMI: ____
Monthly family income: ____ Number of family members: ____

General health status:
Heart disorders: ____
High blood pressure: ____
Pulmonary infections: ____
Gastric disorders: ____
Mouth and teeth alterations: ____
Malnutrition: ____
Dehydration: ____
Diabetes (type): ____
Tracheostomy: present / absent
Cannula: metal / PVC / plastic / silicone / with cuff / without cuff
Mechanical ventilation: present / absent
Non-invasive ventilation: mask / nasal

Swallowing complaint:
Indirect clinical evaluation
Direct clinical evaluation
Signs of penetration/aspiration:
Facial color alteration: ____
Respiratory rate alteration: ____
O2 saturation alteration: ____
Maneuvers utilized: ____

Swallowing Videoendoscopic Evaluation
1. Nasal cavities
Septum: centered / deviated R / deviated L / non-obstructive irregularities
Mucosa: pale / edematous / wet / atrophic
Turbinates: normotrophic / hypertrophic
2. Rhinopharynx:
Mucosa: pale / edematous / wet / atrophic
Eustachian tube ostium: free / obstructed
3. Pharynx-soft palate sphincter:
Phonation: complete closure / incomplete closure / coronal / sagittal / circular / circular with Passavant ring
Swallowing: complete closure / incomplete closure / coronal / sagittal / circular / circular with Passavant ring
4. Hypopharynx
Tongue base mobility: proper / altered ____
Posterior wall mobility: proper / altered ____
Vallecula: normal / lesion / saliva stasis
Epiglottis: normal / omega-shape / lesion ____
Arytenoids: normal / hyperemia / edema
Interarytenoid region: normal / hyperemia / edema
Pyriform recess: free / obstructed / saliva stasis R / L
Sensitivity: normal / reduced / absent
5. Larynx
Vocal folds / ventricular folds: mobile / normal / paresis R L / hyperconstriction R L / immobility R L
Laryngeal asymmetry: yes / no; arching: R / L; sensitivity to the mechanical stimulus; atrophy: R / L
Epiglottis: normal / altered / lesion ____ R / L
Aryepiglottic fold: normal / altered / other ____
Subglottis: normal / altered
6. Glottic closure: complete / incomplete; consistent / inconsistent; posterior triangular slit / mid-posterior triangular slit / anterior spindle-like slit / spindle-like slit in all the extension / hourglass-like slit
Swallowing Videoendoscopic Evaluation (SVE)"}
+{"text": "The presented method is an adaptation of NeuroKit2 to simplify and automate computation of the various mathematical estimates of heart rate variability (HRV) or similar time series. By default, the present approach accepts as input electrocardiogram R-R intervals (RRIs) or peak times, i.e., the timestamp of each consecutive R peak, but the RRIs or peak times can also stem from other biosensors such as photoplethysmography (PPG) or represent more general kinds of biological or non-biological time-series oscillations. The data may be derived from a single source or several sources, such as conventional univariate heart rate time series or intermittently weakly coupled fetal and maternal heart rate data. The method describes preprocessing and computation of an output of 124 HRV measures, including measures with a dynamic, time-series-specific optimal time delay-based complexity estimation with a user-definable time window length. I also provide an additional layer of HRV estimation looking at the temporal fluctuations of the HRV estimates themselves, an approach not yet widely used in the field, yet showing promise (doi: 10.3389/fphys.2017.01112). To demonstrate the application of the methodology, I present an approach to studying the dynamic relationships between sleep state architecture and multi-dimensional HRV metrics in 31 subjects. NeuroKit2's documentation is extensive. Here, I attempted to simplify things, summarizing all you need to produce the most extensive HRV estimation output available to date as open source and all in one place. The presented Jupyter notebooks allow the user to run HRV analyses quickly and at scale on univariate or multivariate time-series data. I gratefully acknowledge the excellent support from the NeuroKit team.

The methods section is structured as follows. First, following a brief rationale for the method, I outline the HRV metrics computed. Second, I describe the implementation in Python. This section contains several elements defining the functions for executing the data loading, preprocessing and feature computation steps, followed by data saving; as a last step, I provide the code to tie everything together for single-step execution. At last, I present an application of the code to an open-source dataset and conclude with remarks on broader usage.

Heart rate variability (HRV) as a search term on PubMed rendered ~55,000 publications as of June 16, 2022. While the first studies appeared in 1925, there has been a notable rise in scientific publishing since around 1975, with some 400 papers appearing annually as of 2021. This is likely attributable to the steady increase in computational capacity and access to it, along with the growing recognition of HRV physiology and pathophysiology.
For example, HRV has been recognized as a biomarker of health and stress in adult and developing organisms, reflecting heart-brain interactions and resulting, among other observations, in the phenomenon of heartbeat-evoked potentials, a direct reflection of bidirectional brain-heart communication.

The number of HRV estimates, sometimes also referred to as metrics or biomarkers, has grown as well, now exceeding 100, albeit it is understood that some of these estimates are collinear.

With the advent of Digital Health and the increased utilization of wearable or ambient sensors to capture heart rate and other biological oscillations, awareness of the caveats of HRV analysis, in contrast to the traditional electrocardiogram (ECG)-based approach, also needs to rise.

Several toolboxes have been built to collate the existing methodologies in a more accessible format and foster the discovery of new biomarkers of health outcomes based on HRV and other physiological time series.

In parallel, the ecosystem of Python-based open-source packages for time-series processing has also been maturing. One such package stands out in terms of methodological scope, functional depth, rich API and constant updates through a large international community of researchers: NeuroKit2, a Python toolbox for neurophysiological signal processing.

The presented method is an adaptation of NeuroKit2 to simplify and automate computation of the various mathematical estimates of HRV or similar time series. The method describes preprocessing and computation of an output of 124 HRV measures, including measures with a dynamic, time-series-specific optimal time delay-based complexity estimation with a user-definable time window length (Table 1).

I also provide an additional layer of HRV estimation looking at the temporal fluctuations of the HRV estimates themselves, an approach not yet widely used in the field, yet showing promise.

Finally, I present an application of the proposed HRV estimation pipeline to an open-source dataset from PhysioNet acquired in 31 subjects during sleep using Apple Watch and enriched with expert annotation of sleep states [18,19].

How does this methodology add to the existing set of techniques and tools? NeuroKit2's documentation is extensive. Here, I attempted to simplify things, summarizing all the researcher needs to produce the most extensive HRV estimation output available to date as open source and all in one place. The presented Jupyter notebooks allow the user to run HRV analyses quickly and at scale on univariate or multivariate time-series data. I gratefully acknowledge the excellent support from the NeuroKit team.

The key features of the presented methodology are:
(1) Univariate or multivariate time-series input; ingestion, preprocessing and computation of 62 HRV metrics.
(2) Standardization of RRI window lengths and RRI duration-specific computation of complexity estimates.
(3) Estimation of intra- and inter-individual higher-order temporal fluctuations of HRV metrics.
(4) Application to a sleep dataset recorded using Apple Watch with expert sleep labeling.

The step-by-step approach is as follows.

1. Create a dedicated virtual environment.

You may use conda or another environment manager such as pip or Docker. The choice boils down to your preferences and constraints: for example, certain Python packages can only be installed with pip and not with conda. For the proposed approach, I am not aware of any constraints that prevent the user from using conda. Ultimately, using a virtual environment will help down the road to ensure your Python analytical pipeline keeps working and does not get broken by unintended package updates and disrupted interdependencies. As an alternative to this conda step, I provide a Docker container (see the data availability links in the final section).
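To make step 1 concrete, a minimal sketch of creating and provisioning the environment in a terminal before launching Jupyter; the Python version pin and the exact package list are assumptions, not prescribed by the original text.

# run in a terminal (Python version is an assumption)
conda create -n datanalysis python=3.9 -y
conda activate datanalysis
# packages imported in step 2 below; openpyxl is assumed for pandas' to_excel
pip install neurokit2 pandas numpy matplotlib scipy hmmlearn openpyxl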
# call conda datanalysis environment
!conda init bash
!conda activate datanalysis  # or use your own preferred venv

2. Load the required and recommended packages.

import neurokit2 as nk
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
import scipy.io
from pathlib import Path
from scipy.stats import variation
from hmmlearn import hmm
# load Matlab data files
from scipy.io import loadmat

3. Import raw peaks.

The source may be Matlab files or whatever input data format you may need. In the present example, we load a duo of files: corresponding maternal and fetal peak data. Your use case may differ. For example, you may load just one set of peak data, three or more sets of peak data derived from different ECG channels, an ECG-derived peak-times channel and a PPG-derived peak-times channel, etc. Simply amend the code accordingly by editing and adding the additional lines for each step as required.

In this work, because the focus is on peak times or heart rate time series, an important step is skipped deliberately: the derivation of the peak data from the raw signal. This can be ECG, PPG or otherwise recorded blood pressure fluctuations (pulse). The HRV Task Force recommends checking for the presence of ectopic heartbeats, e.g., premature ventricular contractions (PVCs) [43].

# Import raw peaks from the mat files; adjust to fit your input data format
f_filepath_peaks = Path.cwd() / "raw_peaks/f"  # fetal raw peaks mat files
m_filepath_peaks = Path.cwd() / "raw_peaks/m"  # maternal raw peaks mat files

4. Get ready for batch file processing.

a. Create a list of relevant files in the directory.

f_peaks_files = [f for f in sorted(f_filepath_peaks.iterdir())  # create a list of relevant files in directory
                 if f.suffix == '.mat']
m_peaks_files = [f for f in sorted(m_filepath_peaks.iterdir())
                 if f.suffix == '.mat']

b. Read one file at a time using the above list; trim, clean, and convert to RRI.
c. The present syntax is for a specific ECG format; adapt it to your use case.
d. Iterate over the files in the f_ or m_peaks_files lists and extract the correct peaks channel as a numpy array.

def read_mat_file(f_peaks_file, m_peaks_file):  # signature reconstructed; elided in source
    # Import 5th row of the mat file's peak data, which has 1000 Hz sampling rate;
    # you may need to adapt this step as per your data structure
    f_file_PEAK_raw = loadmat(f_peaks_file)
    m_file_PEAK_raw = loadmat(m_peaks_file)
    f_peaks = f_file_PEAK_raw['fetal_Rpeaks'][4]  # the 5th row, ECG-SAVER-extracted peaks channel
    m_peaks = m_file_PEAK_raw['mother_Rpeaks'][4]  # the 5th row
    # Trim trailing zeros
    f_peaks_trimmed = np.trim_zeros(f_peaks)
    m_peaks_trimmed = np.trim_zeros(m_peaks)
    # Artifact removal [see next section for details]
    f_clean_peaks = nk.signal_fixpeaks(...)  # allow 80-180 bpm; arguments elided in source
    m_clean_peaks = nk.signal_fixpeaks(...)  # allow 40-150 bpm; arguments elided in source
    # Document artifacts from each run as clean_peaks[0]: build a dataframe for each file over all segments
    # Convert to RRI
    f_rri = peaks_to_rri(...)  # arguments elided in source; see the UDF in step 5
    m_rri = peaks_to_rri(...)
    return f_clean_peaks[1], m_clean_peaks[1], f_rri, m_rri, f_clean_peaks[0], m_clean_peaks[0]

e. Proceed with the steps below: HRV compute, save. Cf. the final section (10).

5. Convert peaks to RRIs.
Use the following UDF, borrowed from NeuroKit2, to take the cleaned peaks as input: peaks_to_rri.

# Some NK functions take RRIs
# So use these UDFs borrowed from the NK package: convert peaks to RRI on the cleaned peaks output
def peaks_to_rri(peaks=None, sampling_rate=1000, interpolate=False, **kwargs):  # signature reconstructed from the calls in this document
    rri = np.diff(peaks) / sampling_rate * 1000
    if interpolate is False:
        return rri
    else:
        # Minimum sampling rate for interpolation
        if sampling_rate < 10:
            sampling_rate = 10
        # Compute length of interpolated heart period signal at requested sampling rate.
        desired_length = int(np.rint(peaks[-1]))
        rri = signal_interpolate(...)  # arguments elided in source
        return rri, sampling_rate

6. Artifact correction.

a. This is a key step that will influence everything downstream. It is often not reported clearly in studies.
b. Adjust the sampling rate and threshold settings as appropriate for your data.
c. Note that we save the logs of artifact correction for audit purposes. Sometimes you need to know why a certain dataset behaved the way it did, and this documentation can come in handy.

# Artifact correction
# Integrated into the above UDF read_mat_file, but you may find this useful to adopt elsewhere in your code
# https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.signal.signal_fixpeaks
# Artifact removal on peaks using Kubios: write into a UDF taking the trimmed_peaks input
# caution: nk.signal_fixpeaks takes peaks, not RRI!
# nk.signal_fixpeaks saves the corrected peak locations to the [1] index of the output data structure
# accessible like so: clean_peaks[1]
# Review the settings for fetal versus maternal RRI inputs! Adjust to match your RRI physiology
# interval_min - minimum interval btw peaks | interval_max - maximum interval btw peaks
f_clean_peaks = nk.signal_fixpeaks(...)  # fetal settings; arguments elided in source
m_clean_peaks = nk.signal_fixpeaks(...)  # maternal settings; arguments elided in source
# Convert trimmed and cleaned peaks to RRI (using _trimmed_ raw peaks as input!)
rri_clean = peaks_to_rri(...)  # arguments elided in source

7. Compute all HRV metrics segment-wise.

a. Rather than computing on the entire time series at once and trading away reproducibility as a result, we (i) set the segment duration explicitly a priori and (ii) take advantage of the segment-wise estimate of HRV to investigate the higher-order structure of the HRV metrics themselves.
b. For complexity estimates, note that we use segment duration-specific estimation of the optimal time delay rather than default settings. This allows us to compute FuzzEn, FuzzEnMSE, FuzzEnRCMSE and cApEn specifically for the optimal time delay. Why select these complexity estimates? It is heuristic. I have found Fuzzy Entropy estimates to be understudied and robust, especially with RRI time series. This is hence worthy of additional attention in future studies deploying complexity estimates.
Other time-delay-dependent complexity estimates can be plugged in here, all made available via the NeuroKit2 API.

# UDF compute_HRV
# This UDF computes all [regular and extra non-linear] HRV metrics segment-wise for a file
def compute_HRV(peaks, rri, SubjectID):  # signature reconstructed from the calls in section 10
    # Regular HRV matrix (from peaks)
    duration_peaks = peaks[len(peaks)-1]  # gives me the duration in samples
    divider = duration_peaks/1000/60/5  # sampling_rate, 5 min window segments
    segment = np.array_split(peaks, int(divider))  # divide into segments of 5 min; the last segment may be shorter; discard during statistical analysis on HRV metrics
    segment_df = pd.DataFrame()
    for i in range(len(segment)):
        seg_hrv = nk.hrv(segment[i], sampling_rate=1000)  # arguments partly elided in source
        segment_df = pd.concat([segment_df, seg_hrv])
    # Additional nonlinear HRV metrics from RRIs
    segment = np.array_split(rri, int(divider))  # divide _RRI_ into segments of 5 min; the last segment may be shorter; discard during statistical analysis on HRV metrics
    # create my dataframe structure to which to append the list as a row in the following
    extra_columns = [...]  # column names elided in source
    extra_complexity_df = pd.DataFrame(columns=extra_columns)
    df_length = len(extra_complexity_df)
    extra_complexity_df_total = pd.DataFrame(columns=extra_columns)
    for i in range(len(segment)):
        optimal_complexity_parameters = nk.complexity_delay(segment[i])  # arguments elided in source
        extra_complexity_segment_fuzen = nk.entropy_fuzzy(...)        # at the optimal delay; arguments elided in source
        extra_complexity_segment_fuzen_mse = nk.complexity_fuzzymse(...)
        extra_complexity_segment_fuzen_rcmse = nk.complexity_fuzzyrcmse(...)
        extra_complexity_segment_capen = nk.entropy_approximate(...)
        segment_duration = np.sum(segment[i])/1000  # segment duration in seconds
        # join all individual output floats, including the values for each segment and its duration in seconds as numpy.sum(segment[i])/1000
        extra_complexity = [...]  # row assembly elided in source
        extra_complexity_df.loc[df_length] = extra_complexity
        extra_complexity_df_total = pd.concat([extra_complexity_df_total, extra_complexity_df])
    # simply concatenate both df's horizontally; this scales, allowing addition of other df's from bivariate computations
    final_df = pd.concat([segment_df, extra_complexity_df_total], axis=1)  # arguments elided in source; horizontal concatenation assumed
    final_df["SubjectID"] = SubjectID  # assumed from the return comment below
    return final_df  # this is per subject, with the SubjectID output along on the right side
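A minimal usage sketch of the UDFs above on the first file pair; the subject ID is derived from the file name, as done in section 10, and the signatures follow the reconstructions in this document.

# Read, clean and convert one fetal/maternal file pair, then compute segment-wise HRV
f_peaks_c, m_peaks_c, f_rri, m_rri, f_art, m_art = read_mat_file(f_peaks_files[0], m_peaks_files[0])
f_hrv_df = compute_HRV(f_peaks_c, f_rri, format(f_peaks_files[0].stem))
print(f_hrv_df.head())  # one row of HRV metrics per 5-min segment, SubjectID on the right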
8. Compute higher-order HRV metrics.

Here I made explicit and expanded upon what we attempted first in earlier work.

def compute_basic_stats(ts_data, SubjectID):  # signature reconstructed from the calls below
    # compute mean and variation
    # assuming "ts_data" is where my HRV metric values list is per subject
    mean = np.mean(ts_data)
    coeff_variation = variation(ts_data)
    # this function works similar to variation but works purely with numpy
    # cv = lambda x: np.std(x) / np.mean(x)
    # First quartile (Q1)
    Q1 = np.percentile(ts_data, 25)
    # Third quartile (Q3)
    Q3 = np.percentile(ts_data, 75)
    # Interquartile range (IQR)
    IQR = Q3 - Q1
    midhinge = (Q3 + Q1)/2
    quartile_coefficient_dispersion = (IQR/2)/midhinge
    # adding entropy estimate; this is experimental!
    # ts_entropy = nk.entropy_sample(ts_data)
    # yielding error "could not broadcast input array from shape into shape (7)" | the following syntax fixes that and is more elegant in that it estimates the optimal delay
    # optimal_complexity_parameters = nk.complexity_delay(ts_data)
    # ts_entropy = nk.entropy_fuzzy(ts_data)
    # still yielding len error
    ts_entropy = nk.entropy_shannon(ts_data)
    basic_stats = [SubjectID, mean, coeff_variation, Q1, Q3, IQR, midhinge, quartile_coefficient_dispersion, ts_entropy]  # leading entries reconstructed; partly elided in source
    return basic_stats

# HMM model
def do_hmm(ts_data):
    # ts_data = numpy.array(data)
    gm = hmm.GaussianHMM(n_components=2)
    gm.fit(ts_data.reshape(-1, 1))  # reshape assumed; arguments elided in source
    hmm_states = gm.predict(ts_data.reshape(-1, 1))
    # hmm_states = [states.tolist()]
    print(hmm_states)
    return hmm_states  # next, add _states_ iteratively for all subjects to a states_Uber list to spot patterns

# deal with the last column, which is a string and needs to be skipped
def skip_last_column(lst):
    # unpack the list of lists
    def Extract(lst):
        return [item[0] for item in lst]
    # check for a string in the first sublist
    element_to_check = Extract(lst)[0]
    return isinstance(element_to_check, str)  # return Boolean for presence of a string in the sublist

def compute_higher_HRV(final_df, SubjectID):  # signature reconstructed from the calls in section 10
    # assuming "final_df" is the dataframe where the HRV metric values are listed segment-wise per subject
    # compute basic stats
    higher_order_basic_stats = []
    for i in range(final_df.shape[1]):  # the last column is the SubjectID string, so skipping it below
        metric = final_df.iloc[:, [i]].values
        # String skip logic to skip over the SubjectID column
        if skip_last_column(metric) == False:
            results_temp1 = compute_basic_stats(metric, SubjectID)
            higher_order_basic_stats.append(results_temp1)
        else:
            i += 1
    basic_stats = pd.DataFrame(higher_order_basic_stats)  # arguments elided in source
    columns = final_df.columns[0:63]  # make sure I don't select the last column, which has SubjectID
    basic_stats.index = [columns]
    basic_stats_final = basic_stats.T  # transpose
    # compute HMM stats: computing on just 7 data points leads to errors in some instances, so omit for now and revisit later when used on longer HRV metrics time series, say, several hours
    # Estimate the HMM probabilities output for a given segmented HRV metric
    # Then compute basic_stats on this estimate;
    # Hypothesis: stable tracings will have tight distributions of HMM values and resemble entropy estimates;
    # This will apply statistically significantly for physiologically stressed (tighter distributions) versus control subjects
    # higher_order_basic_stats_on_HMM = []
    # for i in range(final_df.shape[1]):  # the last column is the SubjectID string, so removing it
    #     metric = final_df.iloc[:, [i]].values
    #     # some HRV metrics have NaNs and the "do_hmm" script crashes on those;
    #     # adding logic to skip if NaN is present
    #     a = any(pd.isna(metric))  # checking if _any_ values in the HRV metrics list are NaN
    #     b = skip_last_column(metric)
    #     skip_reasons = {a: 'True', b: 'True'}
    #     # NaN or string skip logic
    #     if any(skip_reasons):
    #         i += 1
    #     else:
    #         results_hmm_temp2 = do_hmm(metric)
    #         print(results_hmm_temp2)
    #         print(type(results_hmm_temp2))
    #         results_stats_hmm_temp = compute_basic_stats(results_hmm_temp2, j)  # j being the file number; != SubjectID
    #         higher_order_basic_stats_on_HMM.append(results_stats_hmm_temp)
    # basic_stats_on_HMM = pd.DataFrame(higher_order_basic_stats_on_HMM)
    # basic_stats_on_HMM.index = [columns]
    # basic_stats_on_HMM_final = basic_stats_on_HMM.T  # transpose
    # higher_final_df = pd.concat([basic_stats_final, basic_stats_on_HMM_final])
    higher_final_df = basic_stats_final  # leaving the syntax above for when the data allow HMM analysis
    return higher_final_df  # this includes SubjectID

9. Save everything.

Gather all data from the separate data frames into spreadsheets for further analyses.

# Execute the entire analysis
# For each file:
# - call read_mat_file
# - call compute_HRV
# - save results to Excel

10. Execute the entire pipeline calling the above-defined functions.

# Initialize data structures
f_artifacts_log = []
m_artifacts_log = []
Uber_fHRV = []
Uber_mHRV = []
Uber_higher_fHRV = []
Uber_higher_mHRV = []
i = 0
# Compute & save into lists
while i <= len(f_peaks_files)-1:  # careful - this assumes an equal number of fetal and maternal raw files
    # read the peaks file, trim trailing zeros, artifact-correct it, convert to RRIs and return the results
    f_clean_peaks, m_clean_peaks, f_rri, m_rri, f_clean_peaks_artifacts, m_clean_peaks_artifacts = read_mat_file(f_peaks_files[i], m_peaks_files[i])
    fSubjectID = format(f_peaks_files[i].stem)
    mSubjectID = format(m_peaks_files[i].stem)
    f_artifacts_log_i = [fSubjectID, f_clean_peaks_artifacts]  # log row assembly elided in source
    m_artifacts_log_i = [mSubjectID, m_clean_peaks_artifacts]
    # save the artifact processing log from each file starting with its real SubjectID
    f_artifacts_log.append(f_artifacts_log_i)
    m_artifacts_log.append(m_artifacts_log_i)
    # compute all HRV metrics
    ffinal = compute_HRV(f_clean_peaks, f_rri, fSubjectID)  # arguments elided in source
    mfinal = compute_HRV(m_clean_peaks, m_rri, mSubjectID)
    # update the UBER df
    Uber_fHRV.append(ffinal)
    Uber_mHRV.append(mfinal)
    # compute higher_order HRV metrics
    fhigher_final = compute_higher_HRV(ffinal, fSubjectID)
    mhigher_final = compute_higher_HRV(mfinal, mSubjectID)
    # update the UBER_higher_df
    Uber_higher_fHRV.append(fhigher_final)
    Uber_higher_mHRV.append(mhigher_final)
    i += 1
    if i > len(f_peaks_files):
        break
print('Computation completed.')
# save artifacts logs
df_Uber_f_artifacts = pd.DataFrame.from_records(f_artifacts_log)  # edit the name as needed
df_Uber_m_artifacts = pd.DataFrame.from_records(m_artifacts_log)  # edit the name as needed
df_Uber_f_artifacts.to_excel(...)  # output file names elided in source
df_Uber_m_artifacts.to_excel(...)
# save HRV results
Uber_fdf = pd.concat(Uber_fHRV)
Uber_fdf.to_excel(...)
Uber_mdf = pd.concat(Uber_mHRV)
Uber_mdf.to_excel(...)
Uber_higher_fdf = pd.concat(Uber_higher_fHRV)
Uber_higher_fdf.to_excel(...)
Uber_higher_mdf = pd.concat(Uber_higher_mHRV)
Uber_higher_mdf.to_excel(...)

11. Method validation: demonstrating the performance of the proposed HRV pipeline in a retrospective analysis of a polysomnography dataset recorded with Apple Watch.

As the validation dataset, the data by Walch et al. were used, which are available from PhysioNet. The team acquired heart rate data in 31 subjects during sleep using Apple Watch and enriched the data with expert annotation of sleep states [18,19]. The label files contain, for each scored epoch, the date (in seconds since polysomnography start) and the sleep stage.

This dataset is appealing for several reasons for the intended objective of HRV pipeline validation:
(1) The heart rate data, extracted from the Apple Watch, are publicly available. The data were recorded from 31 subjects during sleep, average 7.3 h per recording, and come expertly annotated with sleep state labels.
(2) The authors provided a script on GitHub for how to enable such data extraction in the future. This should make such a demonstration particularly relevant for future studies.
(3) This dataset ties in well with the accompanying publication in the Journal of Biomedical Informatics.

Interestingly, the presented HRV pipeline yields insights into sleep state dynamics reflected in HRV, which I discuss below.
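As a sketch of the cohort-level association tests reported below (correlating a higher-order HRV summary with deep sleep duration), with synthetic stand-in data clearly labeled as such:

from scipy.stats import pearsonr
import numpy as np

# Synthetic per-subject aggregates for illustration only (n = 31, as in the cohort)
rng = np.random.default_rng(0)
deep_sleep_min = rng.uniform(40, 120, size=31)                        # minutes of deep sleep (synthetic)
cv_sampen = 0.005 * deep_sleep_min + rng.normal(0.5, 0.1, size=31)    # CV of SampEn (synthetic)
r, p = pearsonr(cv_sampen, deep_sleep_min)
print(f"R = {r:.2f}, p = {p:.4f}")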
The underlying code, based on the presented HRV estimation pipeline, and all generated data can be found on FigShare and DockerHub. To analyze this dataset, I expanded the presented HRV pipeline further and deployed it in several ways that can be used as a basis for future studies, as follows.
- The number of HRV metrics was increased from 63 (as per the above Jupyter notebook) to 124 HRV metrics, computed on the entire PhysioNet dataset (cf. Table 1).
- Since this is intended as an example only, for ease of computing, I used the entire dataset to compute HRV; I also set the optimization of complexity parameters to default settings for the same reason. The code is available for those who wish to dive deeper and have the resources to do so.
- The code is presented to determine the total duration of each sleep state per recording using the labeled files.
- The saved continuous and averaged SampEn and RMSSD data are provided for the entire cohort, for future analyses.
- The code and the visualizations of each subject's time course are provided for heart rate, SampEn, and sleep state architecture.
- Across the cohort, the duration of deep sleep correlated with CV SampEn (R = 0.39, p = 0.03) and with CV RMSSD (R = 0.55, p = 0.001), respectively (Fig. 2). The findings show, this time quantitatively across the cohort, a degree of correlation between HRV complexity fluctuations and the duration of deep sleep.
- Next, I expanded the scope by assessing and showing the correlations systematically for all subjects, all 124 HRV metrics, and all sleep states [46].

Sample Entropy (SampEn) is reported as an example complexity metric of HRV over time as it changes during sleep, along with the traditional linear time-domain metric RMSSD; these are plotted along with the heart rate and the sleep state architecture (using the supplied labels). This approach can contribute to studying these relationships systematically and to developing open-source algorithms to reliably detect sleep states from PPG-derived HRV data.

I suggest the following implications for future work. First, the dataset and the presented findings can be studied further using machine learning tools to derive an optimal HRV metric-based predictor of the duration of NREM or REM states. Second, the richness of the temporal fluctuations can be further harnessed for classification and prediction using the code for hidden Markov models (HMM). I consider this to be out of scope for the present manuscript but provide the necessary code.

12. Data availability.

All data produced during this analysis have been deposited on FigShare at 10.6084/m9.figshare.20076464, and a Docker container is available at https://hub.docker.com/r/mfrasch/hrv-pipeline. The notebook is deposited on GitHub pages (https://martinfrasch.github.io): for viewing online here (https://martinfrasch.github.io/MethodsX%20R1%20HRV%20pipeline%20v4.1.html) and as a downloadable Jupyter notebook here (https://martinfrasch.github.io/MethodsX%20R1%20HRV%20pipeline%20v4.1%20FINAL.ipynb); an interactive version can be found here (https://chart-studio.plotly.com/~mfrasch/6).
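For the HMM direction mentioned in the future-work paragraph above, a minimal self-contained sketch with hmmlearn, mirroring the do_hmm helper from step 8; the series is synthetic and illustrative only.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)
# Synthetic HRV-metric time series with two regimes (illustrative only)
series = np.concatenate([rng.normal(1.0, 0.1, 120), rng.normal(1.6, 0.1, 120)])
gm = hmm.GaussianHMM(n_components=2, n_iter=100)
gm.fit(series.reshape(-1, 1))            # hmmlearn expects a 2-D array
states = gm.predict(series.reshape(-1, 1))
print(np.bincount(states))               # dwell counts per hidden state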
Final remarks.

The presented HRV computation pipeline in Python, using the API of the NeuroKit2 package, is shown based on the use of the maternal-fetal trans-abdominally derived non-invasive ECG signal, followed by maternal and fetal ECG extraction using the SAVER algorithm. Recent advances in the in silico modeling of physiological systems open avenues for the discovery of novel HRV metrics and a deeper understanding of the existing ones. The literature indicates a high potential for HRV biomarkers to serve as predictors of important health outcomes, such as cardiac or mental health, as well as in critical care and disorders of consciousness.

The author declares the following financial interests/personal relationships which may be considered as potential competing interests: MGF holds patents on EEG and ECG processing. MGF is founder of and consults for Digital Health companies commercializing the predictive potential of physiological time series for human health."}
+{"text": "Understanding dysregulation of the eukaryotic initiation factor 4F (eIF4F) complex across tumor types is critical to cancer treatment development. We present a protocol and accompanying R package "eIF4F.analysis". We describe analysis of copy number status, gene abundance and stoichiometry, survival probability, expression covariation, correlating genes, mRNA/protein correlation, and protein co-expression. Using publicly available large multi-omics data, eIF4F.analysis permits computationally derived and statistically powerful inferences regarding initiation factor regulation in human cancers and the clinical relevance of protein interactions within the eIF4F complex.

For complete details on the use and execution of this protocol, please refer to Wu and Wagner (2021).

•An R package to analyze eIF4F dysregulation, using large multi-omics datasets
•Detailed steps for software installation, data download, and library initialization
•Guidelines for deriving biological and clinical inferences from multiple analyses
•Illustrated code structure explains the bioinformatics pipeline assembly technique

Publisher's note: Undertaking any experimental protocol requires adherence to local institutional guidelines for laboratory safety and ethics.

In complex biological systems, biological processes rely on the participation of multiple proteins through their physical and specific functional interactions.
The clinical relevance of protein-protein interactions (PPIs) has traditionally been investigated through wet-lab approaches using tissue cultures and animal models, which are often complicated by easily perturbed cell culture conditions or poor recapitulation of human diseases. A correlation between in vivo gene co-expression and protein interactions has been reported after analyzing large-scale data from multiple species, including human. The validity of our method derives from the biological requirement for interacting proteins to be simultaneously present within a cell, which implies that the synthesis and degradation of interacting proteins ought also to coincide. As we previously published, users can infer the dysregulation of EIF4F genes and their protein interactions from the analysis results. Our innovative computational method can be extended for broader application to reveal the clinical relevance of PPIs in assorted disease conditions, given additional disease-related datasets, custom lists of candidate protein complexes with subunits, and the implementation of more association analyses. However, the protocol presented here is narrowly targeted to the production of our eIF4F results.

1. Download & install R 4.2.1, if not already installed.

2. Download & install RStudio, if not already installed (https://www.rstudio.com/products/rstudio/download/).

3. Install the required R packages in RStudio.

> install.packages("BiocManager")
> BiocManager::install(version = "3.15")
# install required R packages:
> bio_pkgs <- c(...)  # package names elided in source
> BiocManager::install(bio_pkgs)
# load required packages
> lapply(bio_pkgs, library, character.only = TRUE)  # reconstructed; arguments elided in source

Note: Except for the "RCurl," "R.utils," and "utils" packages that are required for the Download.R file, all dependent packages will be installed in the next step. However, installation of all dependent packages takes a long time and sometimes gives installation errors related to the setup of individual users. Thus, we recommend that users manually install the dependent packages before installing the eIF4F.analysis package.

CRITICAL: If installation of some dependent packages requires installing or updating non-Bioconductor packages, use install.packages for the non-Bioconductor packages. The required packages will be automatically installed the first time you run eIF4F.analysis.

Install the development version of the eIF4F.analysis package from GitHub with the following commands:

4. Install the eIF4F.analysis package in RStudio.

> devtools::install_github("a3609640/eIF4F.analysis")  # repository as cloned in step 6

5. Load the eIF4F.analysis package in RStudio.

> library(eIF4F.analysis)

6. Open the terminal and run the following command line to clone our GitHub repository for the eIF4F.analysis package. This step will store all package files under the home directory as ~/eIF4F.analysis (Figure, panel A).

$ git clone https://github.com/a3609640/eIF4F.analysis

Note: Installing the eIF4F.analysis package on a Linux system is highly recommended.

7. Under the ~/eIF4F.analysis/Script folder (Figure, panel B), there are two R scripts, Download.R and Analysis.R.

# default directory for data download and output storage
> data_file_directory <- Sys.getenv(c("DATA_FILE_DIRECTORY"), unset = "~/eIF4F.analysis/eIF4F_data")
> output_directory <- Sys.getenv(c("OUTPUT_DIRECTORY"), unset = "~/eIF4F.analysis/eIF4F_output")

CRITICAL: By default, Download.R generates the root directory paths ~/eIF4F.analysis/eIF4F_data and ~/eIF4F.analysis/eIF4F_output. These two directory paths are also set inside the package (in Load.R), and the values must agree in both places. The descriptions of the steps that follow assume the use of the default eIF4F_output location.

We provide two R scripts in our GitHub repository for users to download the datasets and to run the eIF4F.analysis package. Users need to clone the package files from GitHub and set up the directories for the input and output files.
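If the default locations are not suitable, the environment variables read by Sys.getenv above can be set before running the scripts; a minimal sketch with hypothetical paths:

# Point the pipeline at custom locations before sourcing the scripts
> Sys.setenv(DATA_FILE_DIRECTORY = "/data/eIF4F_data")   # hypothetical path
> Sys.setenv(OUTPUT_DIRECTORY = "/data/eIF4F_output")    # hypothetical path

Per the CRITICAL note above, the same values must then also be reflected in Load.R so that both locations agree.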
Users need to clone the package files from GitHub and set up the directories for input and output files.
Timing: 1 h
8. Run the Download.R file in RStudio with the following command line.
> source("~/eIF4F.analysis/Script/Download.R")  # path assumes the default clone location
Note: Download times were observed with a 100 Mbps internet connection. The remote data sources mentioned here may offer limited and lower bandwidth.
CRITICAL: By default, users will be able to clone all the scripts and R code from the GitHub repository, download datasets, and store analysis results under a single folder, ~/eIF4F.analysis.
Execution of the Download.R script creates two folders: ~/eIF4F.analysis/eIF4F_data and ~/eIF4F.analysis/eIF4F_output (Figure C).
• Software: R version 4.2.1 and RStudio (2022.07.1 build 554).
• Operating systems: The package was developed and tested with Linux OS (Pop!_OS 22.04 LTS).
CRITICAL: We recommend Linux to run the package.
• Computer hardware:
○ Memory: 48 GB minimum.
○ Processors: tested with Intel Core i7-8700k CPU (6 cores) and Intel Core i7-8550U CPU (4 cores).
○ Disk space: 30 GB minimum. The total disk space for the eIF4F.analysis folder is 26.2 GB, including all downloads and the analysis results.
The package includes one initialization step and seven major analysis steps; the full function reference is available at https://a3609640.github.io/eIF4F.analysis/reference/index.html. Users can execute each analysis step with one exported function that calls a group of internal functions for data processing, data analysis, and plotting. This package organizes all functions related to each analysis step together as one R script under ~/eIF4F.analysis/R. Analysis.R contains the ten exported functions of the package to initialize the package and to execute all analyses presented in Wu and Wagner (2021).
Timing: < 5 min
1. Run the following command line to create the output directories that store the output files.
> initialize_dir()
Note: initialize_dir creates sub-directories under ~/eIF4F.analysis/eIF4F_output.
Run the following command line to define the graphic formats (font size and style).
> initialize_format()  # command reconstructed from the exported function name
Run the following command line to acquire omics datasets from the downloaded data files.
> initialize_data()  # command reconstructed from the exported function name
Note: initialize_data relies on internal initialization functions for each data type:
a. i. initialize_CNV_data reads the CNV data from TCGA, sets the corresponding global variables, and stores them as "TCGA_CNV_value.csv", "TCGA_CNV_sampletype.csv" and "TCGA_CNVratio_sampletype.csv" under the ProcessedData folder.
b. i. TCGA_GTEX_RNAseq_sampletype comes from the recomputed RNAseq dataset from both TCGA and GTEx, "TcgaTargetGtex_RSEM_Hugo_norm_count", and the annotation dataset "TcgaTargetGTEX_phenotype.txt". initialize_RNAseq_data reads the recomputed RNAseq data from both TCGA and GTEx. The implementation details of each operation are within the ~/eIF4F.analysis/R/DEG.R file. initialize_RNAseq_data sets one global variable, TCGA_GTEX_RNAseq_sampletype, for the gene expression analysis (step-3), PCA (step-5) and correlating gene analysis (step-6), and stores TCGA_GTEX_RNAseq_sampletype as "TCGA_GTEX_RNAseq_sampletype.csv" in the ProcessedData folder.
c. i. TCGA_RNAseq_OS_sampletype comes from three datasets: the RNAseq dataset "EB++AdjustPANCAN_IlluminaHiSeq_RNASeqV2.geneExp.xena", the survival dataset "Survival_SupplementalTable_S1_20171025_xena_sp" and the annotation dataset "TCGA_phenotype_denseDataOnlyDownload.tsv". initialize_survival_data reads the RNAseq and patient survival data from TCGA. The implementation details of this operation are within the ~/eIF4F.analysis/R/Survival.R file.
This function sets the global variable TCGA_RNAseq_OS_sampletype for survival analysis (step-4), and stores TCGA_RNAseq_OS_sampletype as "TCGA_RNAseq_OS_sampletype.csv" inside the ProcessedData folder.
d. initialize_proteomics_data reads the proteomics-related data from CCLE and CPTAC LUAD, including proteomics data, annotation data for cancer types, and RNAseq data for the correlation analysis of protein and RNA levels (step-7 and step-8). The implementation details of this operation are within the ~/eIF4F.analysis/R/RNAProCorr.R file.
i. This function sets three global variables for the CCLE data: (1) CCLE_RNAseq contains the RNAseq data derived from "CCLE_expression_full.csv", (2) CCLE_Anno contains the annotation data derived from "sample_info.csv", and (3) CCLE_Proteomics contains the protein expression level data derived from "protein_quant_current_normalized.csv". This function stores CCLE_RNAseq, CCLE_Anno and CCLE_Proteomics as "CCLE_RNAseq.csv", "CCLE_Anno.csv", and "CCLE_Proteomics.csv" inside the ProcessedData folder.
ii. This function also sets two global variables as data frames for the CPTAC LUAD data published in Gillette et al. (2020).
e. initialize_phosphoproteomics_data reads the phospho-proteomics-related data from CPTAC LUAD and sets two global variables with data frames for the protein expression analysis (step-8). The implementation details of this operation are in the ~/eIF4F.analysis/R/Proteomics.R file. This function stores the two global variables CPTAC_LUAD_Phos and CPTAC_LUAD_Clinic_Sampletype as "CPTAC_LUAD_Phos.csv" and "CPTAC_LUAD_Clinic_Sampletype.csv" in the ProcessedData folder.
i. CPTAC_LUAD_Phos contains the phosphoproteomics data published in Gillette et al. (2020).
ii. CPTAC_LUAD_Clinic_Sampletype contains the annotation data and is derived from "S046_BI_CPTAC3_LUAD_Discovery_Cohort_Clinical_Data_r1_May2019.xlsx" and "S046_BI_CPTAC3_LUAD_Discovery_Cohort_Samples_r1_May2019.xlsx".
CRITICAL: The first run of the data initialization functions creates and stores the processed data from the downloaded files. Subsequent runs of the initialization functions check for the existence of the processed data files and read them, which takes much less time than the first run.
Initialization processes rely on three exported functions to define subdirectories for output data, to define the graphic formats (font size and style), and to load data from the downloaded data files. The definitions of the three initialization functions are stored in ~/eIF4F.analysis/R/Load.R.
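Taken together, a typical first session consists of loading the package and running the three initialization functions in order. The call sequence below is a minimal sketch assembled from the exported function names that appear in this protocol:
> library(eIF4F.analysis)
> initialize_dir()     # create the eIF4F_output sub-directories
> initialize_format()  # define graphic formats (font size and style)
> initialize_data()    # load, or build and cache, the processed data frames
The CRITICAL note above describes a cache-on-first-run pattern. In generic R, the technique amounts to something like the following hypothetical helper (not part of the package; shown only to make the behavior concrete):
> read_or_create <- function(csv_path, build) {
+   if (file.exists(csv_path)) {
+     read.csv(csv_path)                    # later runs: read the cached file
+   } else {
+     result <- build()                     # first run: build the processed data
+     write.csv(result, csv_path, row.names = FALSE)
+     result
+   }
+ }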
Timing: < 1 min
This step analyzes the copy number variation (CNV) status of EIF4F genes across TCGA tumors and creates the analysis results both on screen and as pdf files stored in the ~/eIF4F.analysis/eIF4F_output/CNV folder.
4. Run the following command line in RStudio.
> EIF4F_CNV_analysis()
Note: EIF4F_CNV_analysis is a wrapper function of three internal composite functions that take input data frames and call internal functions for analysis (see the sketch at the end of this section).
Note: .plot_bargraph_CNV_TCGA takes the data frame TCGA_CNV_sampletype and calculates the frequency of each CNV status for EIF4F genes in all tumors combined from 33 TCGA cancer types. Its output is a stacked bar plot that ranks the EIF4F genes by the frequencies of copy number gain in individual TCGA cancer types. A second composite function produces a CNV ratio boxplot for each gene in individual TCGA cancer types.
The survival step uses Kaplan-Meier (KM) analysis to associate survival probabilities with gene expression. This function takes an arbitrary gene expression cutoff, 0.2 for 20% or 0.3 for 30%, to stratify the patient groups based on the top or bottom percentages of gene expression within their tumors, and performs KM analysis on all combined TCGA cancer types. A companion function uses the Cox proportional hazards (Cox-PH) regression method and can perform survival analyses on all combined TCGA cancer types or on an individual cancer type such as "lung adenocarcinoma" passed as an argument. The function performs univariable Cox-PH analysis using a single gene's expression as the dependent variable, among other analyses.
Timing: < 30 min
This step identifies the correlating genes of EIF4F subunits from tumor or healthy samples, and outputs results to the ~/eIF4F.analysis/eIF4F_output/COR folder.
8. Run the following command line in RStudio.
> EIF4F_Corrgene_analysis()
Note: EIF4F_Corrgene_analysis is a wrapper function that calls one internal composite function, which takes input data frames and calls internal functions for analysis.
Note: .plot_Corr_RNAseq_TCGA_GTEX takes the data frame TCGA_GTEX_RNAseq_sampletype and selects RNAseq data from two sample types: TCGA tumors or GTEx healthy tissues. This function separately identifies correlating genes (CORs) of EIF4E, EIF4A1, EIF4G1, and EIF4EBP1 from tumor samples or from healthy tissues. The significant CORs for each EIF4F subunit are identified and classified as positive or negative CORs (posCORs and negCORs). This function analyzes the overlap of posCORs or negCORs of the four EIF4F subunits from tumor samples or healthy tissues as Venn plots.
This step analyzes the correlation between RNA and protein levels in the CCLE and CPTAC LUAD datasets, and outputs results to the ~/eIF4F.analysis/eIF4F_output/RNApro folder.
Timing: < 5 min
10. Run the following command line in RStudio.
> EIF4F_Proteomics_analysis()
Note: EIF4F_Proteomics_analysis is a wrapper function that calls two internal composite functions, which take input data frames and call internal functions for analysis.
Note: .plot_scatterplot_protein_LUAD takes the data frame CPTAC_LUAD_Proteomics and selects the proteomics data of tumor samples. It analyzes the correlation between two input proteins across the LUAD tumor samples, compares protein expression in different tumor stages, and outputs results to the ~/eIF4F.analysis/eIF4F_output/proteomics folder.
All analysis results are produced as plots on screen and as pdf files in the output directories. Examples of results from each analysis step are shown in the accompanying figures. The execution of this package relies on specific exported functions and does not allow parameter input within the package. However, for each analysis step, the internal composite functions, the required input data frames, and the dependent functions are stored within one R file.
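The wrapper/composite pattern described in the Notes above can be made concrete with a short sketch. This is a hypothetical shape assembled from function and data-frame names that appear in this protocol, not the package's actual source:
> EIF4F_CNV_analysis <- function() {
+   # each composite takes a prepared global data frame and calls plotting helpers
+   .plot_bargraph_CNV_TCGA(TCGA_CNV_sampletype)  # stacked bar plot of CNV status
+   # ...two further composite functions, whose names were lost in this excerpt,
+   # follow the same pattern (e.g., the CNV ratio boxplot)
+   invisible(NULL)
+ }
The survival step stratifies patients by an expression cutoff and fits KM and Cox-PH models. In generic R with the survival package, the technique looks roughly like the following; df is a hypothetical data frame with overall-survival columns (OS.time, OS) and one gene's expression (EIF4E):
> library(survival)
> cutoff <- 0.2                                     # top/bottom 20% of expression
> q <- quantile(df$EIF4E, c(cutoff, 1 - cutoff))
> sub <- subset(df, EIF4E <= q[1] | EIF4E >= q[2])  # keep only the two extreme groups
> sub$group <- ifelse(sub$EIF4E >= q[2], "high", "low")
> km <- survfit(Surv(OS.time, OS) ~ group, data = sub)  # Kaplan-Meier curves
> cox <- coxph(Surv(OS.time, OS) ~ EIF4E, data = df)    # univariable Cox-PH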
The eIF4F.analysis package relies on many dependent packages for data analysis and plotting. Because those dependent packages are widely used for individual applications, their combined usage provides great convenience for users to achieve a thorough understanding of eIF4F functions. However, a major limitation of this approach is that incompatibility may occur with future version changes, since those packages are maintained independently of each other. This package aims to provide an easy method to reliably reproduce the results from Wu and Wagner (2021).
Problem: During preparation step three (installing and loading eIF4F.analysis), the following messages appear:
Attaching package: 'eIF4F.analysis'
The following objects are masked _by_ '.GlobalEnv':
initialize_data, initialize_dir, initialize_format
Solution: This means that objects with the same names as (exported) objects in the package are present in your global environment. Clean the environment and reload the package.
Problem: While running EIF4F_Corrgene_analysis at step-6 (analyze the correlating genes (CORs) of EIF4F subunits), you run into the following error:
> EIF4F_Corrgene_analysis()
Failed with error: org.Hs.egPFAM is defunct. Please use select() if you need access to PFAM or PROSITE accessions.
Note: The object org.Hs.egPFAM is not used for any operation in the script, so this error message does not affect the results of the analyses.
Solution: Run the following command to reload the "org.Hs.eg.db" package. If the same error message about the defunct org.Hs.egPFAM appears, please reinstall the "org.Hs.eg.db" package to the latest version (3.15.0).
> library(org.Hs.eg.db)
Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Gerhard Wagner (gerhard_wagner@hms.harvard.edu). This study did not generate new unique reagents."} +{"text": "Lacunicambarus thomai using video surveillance to determine their degree of surface activity and behavioral patterns. Throughout 664 hrs of footage, we observed a surprisingly high amount of activity at the surface of their burrows, both during the day and night. The percentage of time that individual crayfish were observed at the surface ranged from 21% to 69% per individual, with an average of 42.48% of the time spent at the surface across all crayfish. Additionally, we created an ethogram based on six observed behaviors and found that each behavior had a strong circadian effect. For example, we recorded only a single observation of foraging on vegetation during the day, whereas 270 observations of this behavior were documented at night. Overall, our results suggest that burrowing crayfishes may exhibit higher levels of surface activity than previously thought. To increase our understanding of burrowing crayfish behavioral ecology, we encourage the continued use of video-recorded observations in the field and the laboratory. In contrast to most crayfish species, which inhabit permanent bodies of water, a unique burrowing lifestyle has evolved several times throughout the crayfish phylogeny. Burrowing crayfish are considered to be semi-terrestrial, as they burrow to the groundwater, creating complex burrows that occasionally reach 3 m in depth. Because burrowing crayfishes spend most of their lives within their burrow, we lack a basic understanding of the behavior and natural history of these species. However, recent work suggests that burrowing crayfishes may exhibit a higher level of surface activity than previously thought. In the current study, we conducted a behavioral study of the Little Brown Mudbug, Lacunicambarus thomai. Much of what is known about crayfish behavior comes from a few widely studied species (e.g., Procambarus clarkii, Procambarus virginalis, Faxonius virilis), all of which inhabit lentic and lotic aquatic environments.
Although most crayfish species inhabit surface water systems like streams, lakes, rivers, ponds, and marshes, crayfishes have repeatedly evolved a semi-terrestrial burrowing lifestyle throughout their phylogeny. Although the exact function of these excursions is unknown, it is unclear whether or not L. thomai engages in such behaviors. In summary, based on our preliminary investigation, we found a surprisingly high degree of surface activity in this species. Further, throughout our study, we did not observe any intra-specific interactions. Again, scattered observations highlight the social behavior of burrowing crayfishes, but interestingly, we did not observe any social behavior throughout the course of our study. Future studies should conduct similar behavioral observations throughout the year and also investigate potential demographic differences in the behaviors we report. By expanding the time frame in which records occur, social interactions or larger foraging excursions may be recorded. Lastly, although our study has shed light on the activity of crayfishes at the surface, much remains unknown about the subsurface behavior of burrowing crayfishes. Additionally, since we discovered that this species displays easily identifiable postural behaviors, this may allow for automated video analysis of body position, activity, and behaviors. As such, the continued use of burrowing crayfish observation chambers will be valuable.
Supporting information: S1-S3 Files (MP4), S1-S4 Tables (DOCX), S1-S4 Figs (PNG).
3 May 2022
PONE-D-21-39709
On the surface or down below: Field observations reveal a high degree of surface activity in a burrowing crayfish, the Little Brown Mudbug (Lacunicambarus thomai)
PLOS ONE
Dear Dr. Kaine,
Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
Please submit your revised manuscript by Jun 17 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.
We look forward to receiving your revised manuscript.
Kind regards,
Junhu Dai, Ph.D.
Academic Editor
PLOS ONE
Journal Requirements:
When submitting your revision, we need you to address these additional requirements.
1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf
2. In your Methods section, please provide additional information regarding the permits you obtained for the work. Please ensure you have included the full name of the authority that approved the field site access and, if no permits were required, a brief statement explaining why.
3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.
Additional Editor Comments:
Please revise the manuscript by carefully following the reviewers' comments and suggestions. Since one reviewer recommended rejection, you should be especially careful with this revision. Good luck.
[Note: HTML markup is below. Please do not edit.]
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1:\u00a0YesReviewer #2:\u00a0Partly********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1:\u00a0YesReviewer #2:\u00a0No********** 3. Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. The Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1:\u00a0The authors made an interesting work, continuously filming the surface behavior of crayfish which were thought before to stay mostly or totally in underground burrows. They registered activity both during the day and night along with daily environmental factor variations and analyzed their behavior through the video recordings. While most behavioral studies in crayfish have been done in laboratory settings, this one contributes by rescuing ecological context and opening new perspectives of research. It is mostly well done but there are three general issues to be considered:1) One of the big contributions of this work is to enable discrimination of individual behavior, which is not a simple task in natural observations. Individual data presentations are fine but I suspect that when average analysis are presented, some or most of them are not considering that one individual (#2) with more than double the number of data collection is biasing the results. This can be deduced from some graphs while others do not let us know because description of average calculations are missing in Materials and Methods.2) While behavioral analysis can be made when these crayfish are on surface, nothing can be stated of their underground times. In this sense, it is an error to assign \u201cinactivity\u201d for underground times.3) Discussion needs to be improved.Comments:- Study Site: More basic description is required of the study area around the filming spot, to better explore the results. It is said that most species (tertiary and secondary burrowing crayfish) inhabit lotic and lentic environments but it is not clear how far are the Primary burrowing species of this study from water bodies, for instance. It is said in the Introduction that \u201cLacunicambarus thomai is a burrowing crayfish species with a high propensity to inhabit burrows in marshes, roadside ditches, and flooded fields (38). Populations of L. 
thomai often live in localized colonies with conspecifics and inhabit burrows that are relatively simple but can nonetheless be up to 1-1.5 m deep \u201d. However, nothing, no information is provided of the specific area where the study took place: proximity from rivers, vegetation types and cover densities, average ranges of daily environmental factor variations, Latitudinal coordinates.- Burrows were selected and filmed. How far is one burrow from the other? Inform the average land area inside which the 6 burrows were located. One individual is associated to each burrow \u2013 is that an assumption or are there evidences that they are solitary? Furthermore, it is said that burrows may have more than one entrance (line 67), how was this issue treated here?- It is said in Statistical Analysis that none of the independent variables were strongly correlated . A representative graph depicting an average 24h variation of environmental factors in the studied season would be informative in Supplemental Material. How was time included in the models, as a categorical or continuous variable? A continuous variable with linear increase (such as a sequence from 0 to 23) would create artificial associations in the model. Please evaluate, based on the statistical parameters found in Table 3, if it is not enough to use a simple model without interaction between time, humidity and temperature to understand the influence of environmental factors on the surface activity. The reason for this question is that the complexity of the best-fit model seems to have inhibited any discussion about the analysis in the end.- The Results section start with \u201cHourly Activity\u201d (which should be \u201cHourly surface activity\u201d) and Figure 4, but there is no explanation as to how this was calculated. It is explained in Statistical Analysis that for an individual to be considered active on surface in one specific hour and day, it needs to be seen in any time point within that hour in that day. Then, how was the group average/percentage calculated taking into account that each contributing individual was registered for a different number of days? Individuals that had more filmed days should not weight more. Finally, Legend Figure 4 needs to inform that averages were calculated taking into account all individuals and all days.An explicit description of calculation should also be added to \u201cpercentage of time spent active throughout daytime\u201d and \u201c proportion of time spent on surface\u201d. In Table 5, how was the \u201cmean duration\u201d of each behavior calculated, taking into account individuals and number of days each individual was filmed?In Figures 5, 6 and 7, it is interesting to show the \u201ccombined\u201d proportions. However, it is biased by individual 2, which was filmed for more days. Could an unbiased calculation be made here?In Figure 8 the number of observations is again biased to individual 2.- Figure 5 and associated text: comparison should be between \u201csurface\u201d and \u201cunderground\u201d, not between \u201cactive\u201d and inactive\u201d because nothing is known about what the crayfish are doing underground. The same for Figure 6: \u201cnighttime on surface\u201d and \u201cdaytime on surface\u201d. In Line 281, replace \u201c active\u201d by \u201con surface\u201d in \u201cThe percentage of time that crayfish were active during the daytime\u201d. In Line 283, \u201cRegarding nighttime activity on surface\u201d. 
Legend Figure 6: \u201cpercentage of time that each individual crayfish spent on surface during the day and the night\u201d.-- Discussion: In contrast to Bearden et al. (2021), this study brings more information about the behavioral complexity of this particular crayfish species. However, the lengthy discussion is mainly descriptive of results. The authors should explore, for instance, what was found in statistical analysis and how could this be connected to the specificity of the studied environment, to take full advantage of the in situ study. Another suggestion is to take Bearden et al. (2021) as a reference, discuss how the results are constrained by the particular season and microhabitat that was covered in this study.- Humidity was indicated as the most important factor modulating surface appearance in crayfish. This variable, as well as all others were collected from a meteorological station. Any thoughts about the validity of using only macro-environmental measurements in association with behaviors that are restricted to the spatial scale of the entrance of a burrow?- Temperature was shown to be a strong predictor of surface activity. A strong suggestion for future studies is to also consider underground temperature in this analysis. For instance, it has been shown in endothermic subterranean rodents that a combination of external and underground temperatures predict better the episodes of surface emergences . It is reasonable to assume that similar influence is potentially valid for these crayfish.- This crayfish display clear postural signatures that enable behavioral identification through the relative position of body coordinates . This could potentially be used in future automated video analysis of activity patterns.Minor Comments:Fig.2C: view of the cameraLine 248: remove extra \u201cwere\u201dLine 319: \u201cevery single behavior was more likely to occur during day or night compared to another\u201d An alternative could be \u201cdaily phase\u201dTable 4: what does this mean? \u201cthe activity of L. thomai was negatively related to the activity of crayfish.\u201dReviewer #2:\u00a0I understand that behaviour of strictly burrowing crayfish is difficult to study, so the information presented by the authors is surely interesting, and novel. However, the entire manuscript is based on only six individuals that were not even characterized. So, data are very preliminary and should be interpreted more cautiously. Behavioural categories should be also analysed together, considering that are dependent data. I am sorry not to be more positive, but a more specialised journal seems to be more appropriate for the manuscript.Lines 50-52, 109: for the readers it would be interesting to specify that these species are North AmericanLine 67: maybe opening is more suitable than portalLined 128-130: so how many burrows were checked before selecting only six? How about the density of the burrows in the study area?Line 132: crayfish could have been attracted out of the burrows with some baits after the footage to characterize them (for hunt behaviour it is reported they leave the burrow for example)Line 182: were the behavioural data checked for normality?Lines 218-222: those behaviours are dependent each other, so it is better to analyse them together because when crayfish are guarding, for example, they are not feeding or huntingLine 235: please delete during the studyLine 270: please 5 not in italicsI suggest merging Fig 5 and 6 in one figureTable 5: please report all the duration in seconds. 
Moreover, please change during with duration in the captionTables should be better drafted and presentedLine 343: please correct fiveLine 345: please better rephrase this sentenceLine 356: Loughman et al. 2018 is not present in the bibliography; 46 is Loughman et al. 2015Line 371, 448: please consider that only six individuals were observed, so I suggest being more cautious in this statementLine 381: please do not use intricacies but richness or diversityLine 415: please correct \u201cis required\u201dLine 436: I think it is \u201cuse its claws to push the mud\u201d********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose \u201cno\u201d, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1:\u00a0NoReviewer #2:\u00a0Nohttps://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at\u00a0figures@plos.org. Please note that Supporting Information files do not need this step.While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool,\u00a0 18 May 2022Dear Dr. Junhu Dai,Thank you for allowing us the opportunity to revise our manuscript. The comments from the two reviewers were extremely helpful and made the manuscript much stronger. Based on their suggestions, we incorporated most suggestions to our manuscript. Below, you will find a letter detailing how we dealt with each comment of each reviewer. Where applicable, we state what we have changed and the location in which you will find our modifications in our updated manuscript. We address each reviewer comment (C#) in individual answers (A#) to keep everything organized.Reviewer #1: C1: The authors made an interesting work, continuously filming the surface behavior of crayfish which were thought before to stay mostly or totally in underground burrows. They registered activity both during the day and night along with daily environmental factor variations and analyzed their behavior through the video recordings. While most behavioral studies in crayfish have been done in laboratory settings, this one contributes by rescuing ecological context and opening new perspectives of research. It is mostly well done but there are three general issues to be considered:A1: Thank you for the generous comments. We appreciate your suggestions and we have attempted to address all your comments below. ______________________________________________________________________________C2: One of the big contributions of this work is to enable discrimination of individual behavior, which is not a simple task in natural observations. Individual data presentations are fine but I suspect that when average analysis are presented, some or most of them are not considering that one individual (#2) with more than double the number of data collection is biasing the results. 
This can be deduced from some graphs while others do not let us know because description of average calculations are missing in Materials and Methods.A2: We agree that analyzing behavioral data presents several issues and that this issue can be exacerbated by having an unequal sample size (263 hrs of footage for crayfish 2 vs 40 hours for crayfish 1). This is why we have chosen to present the majority of our results in terms of the percentages and not raw values. Reporting this information in terms of the raw values would certainly bias our results, as you suggest. However, by reporting this information in percentages, we believe that this is the most appropriate way to report our results and adjust for the bias. Furthermore, our manuscript reports relatively few true statistical analyses (and instead opts for more descriptive statistics of what we observed). This aligns with the primary goals of our manuscript\u2014to describe the behavioral diversity of this elusive crayfishes and demonstrate how this new methodology can be lucrative in the field of crustacean behavioral ecology. ______________________________________________________________________________C3: While behavioral analysis can be made when these crayfish are on surface, nothing can be stated of their underground times. In this sense, it is an error to assign \u201cinactivity\u201d for underground times.A3: We agree. Our manuscript can only report of the surface activity, and not any potential underground behaviors. We have altered our language throughout the entire manuscript to account for this comment. We have changed any discussion of \u201cinactivity\u201d to \u201cinactivity at the surface\u201d. ____________________________________________________________________________C4: Discussion needs to be improved.A4: We have taken several of your comments regarding topics that require additional discussion and expanded on them. ______________________________________________________________________________C5: Study Site: More basic description is required of the study area around the filming spot, to better explore the results. It is said that most species (tertiary and secondary burrowing crayfish) inhabit lotic and lentic environments but it is not clear how far are the Primary burrowing species of this study from water bodies, for instance. It is said in the Introduction that \u201cLacunicambarus thomai is a burrowing crayfish species with a high propensity to inhabit burrows in marshes, roadside ditches, and flooded fields (38). Populations of L. thomai often live in localized colonies with conspecifics and inhabit burrows that are relatively simple but can nonetheless be up to 1-1.5 m deep \u201d. However, nothing, no information is provided of the specific area where the study took place: proximity from rivers, vegetation types and cover densities, average ranges of daily environmental factor variations, Latitudinal coordinates.A5: We agree that information on our study site was lacking in the original version of the manuscript. We have now included a detailed description of the location in which we conducted this study (see Lines 127-134). Because our sampling location is on a residential property, we choose to not report exact coordinates. ______________________________________________________________________________C6: Burrows were selected and filmed. How far is one burrow from the other? Inform the average land area inside which the 6 burrows were located. 
One individual is associated to each burrow – is that an assumption or are there evidences that they are solitary? Furthermore, it is said that burrows may have more than one entrance (Line 67), how was this issue treated here?
A6: We have updated this information within our manuscript. We now provide information regarding how far away one burrow was from another (Lines 131-132), as well as our assumption that each burrow was occupied by a single crayfish (Lines 141-144). We also clarify how we dealt with the issue of burrows with more than a single entrance (Lines 151-153).
______________________________________________________________________________
C7: It is said in Statistical Analysis that none of the independent variables were strongly correlated. A representative graph depicting an average 24h variation of environmental factors in the studied season would be informative in Supplemental Material. How was time included in the models, as a categorical or continuous variable? A continuous variable with linear increase (such as a sequence from 0 to 23) would create artificial associations in the model. Please evaluate, based on the statistical parameters found in Table 3, if it is not enough to use a simple model without interaction between time, humidity and temperature to understand the influence of environmental factors on the surface activity. The reason for this question is that the complexity of the best-fit model seems to have inhibited any discussion about the analysis in the end.
A7: We have now included 4 new figures in the supplemental materials which display the relevant environmental data during our study period. In the original manuscript, we reported that there were no strong statistical correlations between humidity, temperature, and time (as you mentioned). In our environmental analysis, time was coded as a continuous variable. We tested for the correlations beforehand to deal with any artificial associations that you hint at. Because of the low correlation values, we decided to proceed with using time as a continuous variable. Furthermore, the fact that time is not a strong predictor of activity (Table 4) suggests that there is no underlying association between these variables. Our best-fit model was quite complex, as you note: it was a model with three single parameters and two interaction parameters. This is why we not only conducted a model-selection procedure (results in Table 3), but also a multi-model averaging procedure (results in Table 4). The results from our full model averaging technique allow us to evaluate all of the models together and construct effect sizes and weights for each of the terms in our model. This is why we come to the conclusion that humidity had the strongest overall effect (Table 4) compared to the other variables. We have now included a better discussion of this result in our discussion section (see A14).
______________________________________________________________________________
C8: The Results section start with "Hourly Activity" (which should be "Hourly surface activity") and Figure 4, but there is no explanation as to how this was calculated. It is explained in Statistical Analysis that for an individual to be considered active on surface in one specific hour and day, it needs to be seen in any time point within that hour in that day. Then, how was the group average/percentage calculated taking into account that each contributing individual was registered for a different number of days?
Individuals that had more filmed days should not weight more. Finally, Legend Figure 4 needs to inform that averages were calculated taking into account all individuals and all days.
A8: We have changed this section to "Hourly Surface Activity" as you suggest. We have clarified how this was calculated in the methods section (Lines 201-209). Again, we report these values in terms of percentages, and not the raw number of hours each crayfish was active, to avoid any potential sampling biases as you suggest. We have also updated the legend for Figure 4 to note that averages were calculated taking into account all individuals and all days throughout our study.
______________________________________________________________________________
C9: An explicit description of calculation should also be added to "percentage of time spent active throughout daytime" and "proportion of time spent on surface". In Table 5, how was the "mean duration" of each behavior calculated, taking into account individuals and number of days each individual was filmed?
A9: We have updated our methods to also include information as to how we calculated these percentages. By reporting these values in percentages, we avoid any issues based on the number of hours each crayfish was sampled and filmed. We have updated the legend in Figure 5 to describe that our mean durations were calculated based on all crayfish. These data are entirely descriptive in nature, and therefore we believe that reporting such means is the most informative way to describe these different behaviors.
______________________________________________________________________________
C10: In Figures 5, 6 and 7, it is interesting to show the "combined" proportions. However, it is biased by individual 2, which was filmed for more days. Could an unbiased calculation be made here?
A10: Reporting our results in terms of percentages is the best way to provide an unbiased calculation for these figures, which are primarily meant to be descriptive and exploratory in nature. If we had reported the raw amount of time that each behavior/activity was performed, the results would certainly be biased. This is why we chose to report the data in terms of percentages throughout the manuscript.
______________________________________________________________________________
C11: In Figure 8 the number of observations is again biased to individual 2.
A11: Yes, Figure 8 is biased based on the increased number of observations from individual 2. This figure describes trends in whether or not a specific behavior was more or less likely to occur at day versus night. We have included information regarding this issue in our manuscript (Lines 201-203). Anecdotally, if you look at the percentages of all behaviors reported in Figs 5, 6, and 7, there is a similar degree of surface activity/behaviors being exhibited by each crayfish, which implies that this bias may be minimally impacting our results.
______________________________________________________________________________
C12: Figure 5 and associated text: comparison should be between "surface" and "underground", not between "active" and "inactive" because nothing is known about what the crayfish are doing underground. The same for Figure 6: "nighttime on surface" and "daytime on surface".
In Line 281, replace "active" by "on surface" in "The percentage of time that crayfish were active during the daytime". In Line 283, "Regarding nighttime activity on surface". Legend Figure 6: "percentage of time that each individual crayfish spent on surface during the day and the night".
A12: You are correct; we do not want to mislead the readers. Now, all of our figures account for the fact that we are only reporting surface activity and that we cannot comment on underground activity. Figures 5 and 6 are now updated accordingly. We have also changed the legend for Figure 6.
______________________________________________________________________________
C13: Discussion: In contrast to Bearden et al. (2021), this study brings more information about the behavioral complexity of this particular crayfish species. However, the lengthy discussion is mainly descriptive of results. The authors should explore, for instance, what was found in statistical analysis and how could this be connected to the specificity of the studied environment, to take full advantage of the in situ study. Another suggestion is to take Bearden et al. (2021) as a reference, discuss how the results are constrained by the particular season and microhabitat that was covered in this study.
A13: We agree that we could have expanded on our discussion section. Based on many of your suggestions below, we believe that the discussion has significantly improved.
______________________________________________________________________________
C14: Humidity was indicated as the most important factor modulating surface appearance in crayfish. This variable, as well as all others were collected from a meteorological station. Any thoughts about the validity of using only macro-environmental measurements in association with behaviors that are restricted to the spatial scale of the entrance of a burrow?
A14: We agree that the macro-environmental variables that we relate to surface activity are far from ideal for such studies. We have updated our discussion to point this out and suggest that future studies take into account the potential micro-environmental influences within and around the burrow entrances (Lines 406-422).
______________________________________________________________________________
C15: Temperature was shown to be a strong predictor of surface activity. A strong suggestion for future studies is to also consider underground temperature in this analysis. For instance, it has been shown in endothermic subterranean rodents that a combination of external and underground temperatures predict better the episodes of surface emergences. It is reasonable to assume that similar influence is potentially valid for these crayfish.
A15: This is a great suggestion to add to our discussion section. We have now included that looking at the underground temperature variation is a fruitful area for future directions (Lines 416-422).
______________________________________________________________________________
C16: This crayfish display clear postural signatures that enable behavioral identification through the relative position of body coordinates. This could potentially be used in future automated video analysis of activity patterns.
A16: That is a great suggestion to add to the discussion. We now note that these postural signatures could be used for automated video analysis in the future (Lines 511-513).
______________________________________________________________________________
C17: Fig.2C: view of the camera
A17: We have edited this sentence accordingly.
______________________________________________________________________________
C18: Line 248: remove extra "were"
A18: We have removed the extra "were".
______________________________________________________________________________
C19: Line 319: "every single behavior was more likely to occur during day or night compared to another" An alternative could be "daily phase"
A19: We believe that the current wording provides for clarity because "daily phase" may be confused with "day".
______________________________________________________________________________
C20: Table 4: what does this mean? "the activity of L. thomai was negatively related to the activity of crayfish."
A20: This was a typo and should have read "Thus, the activity of L. thomai was negatively related to the degree of environmental humidity." We have made this change to the Table 4 text. Thank you.
______________________________________________________________________________
Reviewer #2:
C21: I understand that behaviour of strictly burrowing crayfish is difficult to study, so the information presented by the authors is surely interesting, and novel. However, the entire manuscript is based on only six individuals that were not even characterized. So, data are very preliminary and should be interpreted more cautiously. Behavioural categories should be also analysed together, considering that are dependent data. I am sorry not to be more positive, but a more specialised journal seems to be more appropriate for the manuscript.
A21: We agree that our study being conducted on only six individuals is a limitation. Low sample sizes are typical for this type of work because of the time-intensive nature and complexities of naturalistic studies. Although this work was only conducted on six individuals (whose demographic information is unknown), we believe that the amount and nature of our data are worthy of publication. In the previous and updated versions of our manuscript, we attempt not to overstate our results, as they are preliminary and exploratory in nature. Furthermore, although we do not report the demographic data from these crayfishes, no published data have suggested sex differences in burrowing crayfish behavior. We do agree that this is an interesting and important angle for future studies, though, so we have included that this information should be investigated in the future (Lines 507-508). Despite these limitations, we still believe that we provide a novel methodology and interesting results (which reviewer 1 highlights) that will be of interest to a wide audience.
Because the scope of PLOS ONE is to publish papers based on their scientific validity and methodology, we believe that our study is of interest to an audience outside of smaller, taxonomically focused journals. We respond to the comment on behavioral categories being analyzed together below (see A27).
______________________________________________________________________________
C22: Lines 50-52, 109: for the readers it would be interesting to specify that these species are North American
A22: We have edited Lines 50-52 to clarify that these are North American species (Lines 50-51).
______________________________________________________________________________
C23: Line 67: maybe opening is more suitable than portal
A23: Portal is a term borrowed from the literature on mammalian burrowing behavior and is widely used to refer to crayfish burrow openings. Therefore, we prefer to keep this language consistent with prior published papers. We have included a few examples below.
Glon, M. G., Adams, S. B., Loughman, Z. J., Myers, G. A., Taylor, C. A., & Schuster, G. A. (2020). Two new species of burrowing crayfish in the genus Lacunicambarus (Decapoda: Cambaridae) from Alabama and Mississippi. Zootaxa, 4802(3), 401-439.
Loughman, Z. J. (2010). Ecology of Cambarus dubius (upland burrowing crayfish) in north-central West Virginia. Southeastern Naturalist, 9(sp3), 217-230.
______________________________________________________________________________
C24: Lined 128-130: so how many burrows were checked before selecting only six? How about the density of the burrows in the study area?
A24: We have now included much more information regarding our study location. Based on the relatively small number of burrows at the study location, we chose to focus on a small sample size but to collect as much data as possible on each of these adult individuals.
______________________________________________________________________________
C25: Line 132: crayfish could have been attracted out of the burrows with some baits after the footage to characterize them (for hunt behaviour it is reported they leave the burrow for example)
A25: Yes, that is true. Unfortunately, we did not capture these individuals. We have included information on how future studies should explore how different demographics may exhibit such behaviors differently (Lines 50-51).
______________________________________________________________________________
C26: Line 182: were the behavioural data checked for normality?
A26: Yes, all data and model fits were checked for normality. We have now included this information in the manuscript (Lines 197-199).
______________________________________________________________________________
C27: Lines 218-222: those behaviours are dependent each other, so it is better to analyse them together because when crayfish are guarding, for example, they are not feeding or hunting
A27: This comment highlights the complexities of working with behavioral data, because an organism cannot perform more than a single behavior at once. This is why we chose to primarily focus on broad, descriptive statistics throughout our study. Because of the novelty of these findings and the potential impact on our understanding of crustacean behavioral ecology, we believe that this is the proper way to analyze and report our data at this stage.
______________________________________________________________________________
C28: Line 235: please delete during the study
A28: We have deleted this phrase.
______________________________________________________________________________
C29: Line 270: please 5 not in italics
A29: We have unitalicized this number.
______________________________________________________________________________
C30: I suggest merging Fig 5 and 6 in one figure
A30: We prefer to keep these figures separate to avoid confusion between the messages of Figure 5 (observed at surface vs. underground) and Figure 6 (observed at surface during the day versus observed at surface during the night).
______________________________________________________________________________
C31: Table 5: please report all the duration in seconds. Moreover, please change during with duration in the caption
A31: We have changed the table to report each behavior's duration in seconds. We have changed during to duration as you suggested.
______________________________________________________________________________
C32: Tables should be better drafted and presented
A32: Without specific comments on the issues with the tables, we are unsure what to change.
______________________________________________________________________________
C33: Line 343: please correct five
A33: We have made this change.
______________________________________________________________________________
C34: Line 345: please better rephrase this sentence
A34: We have rephrased this sentence accordingly (Lines 366-368).
______________________________________________________________________________
C35: Line 356: Loughman et al. 2018 is not present in the bibliography; 46 is Loughman et al. 2015
A35: Thank you for noticing this issue. We meant Loughman et al. 2015 and we have made this change.
______________________________________________________________________________
C36: Line 371, 448: please consider that only six individuals were observed, so I suggest being more cautious in this statement
A36: We agree that we need to be more cautious with these statements. We have added additional information to these sections based on your comments (see Lines 394-395 and Lines 494-495).
______________________________________________________________________________
C37: Line 381: please do not use intricacies but richness or diversity
A37: We have changed the word intricacies to richness as you have suggested.
______________________________________________________________________________
C38: Line 415: please correct "is required"
A38: We have changed "are required" to "is required".
______________________________________________________________________________
C39: Line 436: I think it is "use its claws to push the mud"
A39: Thank you for catching this error. We have fixed the typo.
Attachment: response_to_reviewers_1.docx (submitted filename). 19 Jul 2022
PONE-D-21-39709R1
On the surface or down below: Field observations reveal a high degree of surface activity in a burrowing crayfish, the Little Brown Mudbug (Lacunicambarus thomai)
PLOS ONEDear Dr. Diehl,Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE\u2019s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
Since I had difficulties finding a second Reviewer, I decided to review your revised manuscript myself. As you will see, the negative Reviewer of your first submission appreciated your corrections and judged that your contribution is now ready for publication. However, I found some problems that need to be settled before your paper is accepted. The main point concerns difficulty in linking the results given in the text with those in the figures. I think that there may be some mistakes. For this reason, please carefully check your Result section. Also, I made several editorial recommendations. You will find all this information in the attached file \"D-21-39709_R1_LFB.pdf\". I appreciated your work and think that it will be a useful contribution to crayfish biology. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.Please submit your revised manuscript by Sep 02 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at\u00a0plosone@plos.org. Please include the following items when submitting your revised manuscript:A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.We look forward to receiving your revised manuscript.Kind regards,Louis-Felix Bersier, Ph.D.Academic EditorPLOS ONEJournal Requirements:Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article\u2019s retracted status in the References list and also include a citation and full reference for the retraction notice.Additional Editor Comments:See attached file \"D-21-39709_R1_LFB.pdf\"[Note: HTML markup is below. Please do not edit.]Reviewers' comments:Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the \u201cComments to the Author\u201d section, enter your conflict of interest statement in the \u201cConfidential to Editor\u201d section, and submit your \"Accept\" recommendation.Reviewer #2:\u00a0All comments have been addressed********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2:\u00a0Yes********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2:\u00a0Yes********** 4. Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. Reviewer #2:\u00a0Yes********** 5. 
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #2:\u00a0Yes********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #2:\u00a0I appreciate the responses and corrections provided by the authors; I do not have further comments to be addressed********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose \u201cno\u201d, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2:\u00a0No**********
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool,\u00a0https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at\u00a0figures@plos.org. Please note that Supporting Information files do not need this step.AttachmentPONE-D-21-39709_R1_LFB.pdfSubmitted filename: Click here for additional data file. 6 Aug 2022Dear Dr. Louis-Felix Bersier, Thank you for allowing us the opportunity to revise our manuscript. Your comments were extremely helpful and made the manuscript much stronger. Based on your suggestions, we incorporated most of them into our manuscript. Below, you will find a letter detailing how we dealt with each comment you provided. We address each comment (C#) in individual answers (A#) to keep everything organized.Editor Comments: C1: This level of precision is not necessary. I recommend rounding at unity here (21% to 69%)A1: We have rounded these percentages. ______________________________________________________________________________C2: Could be abridged as \"tertiary-, secondary-, and primary burrowing species\"A2: This is a good suggestion; we have changed the text accordingly.______________________________________________________________________________C3: deleteA3: We have deleted the words \u201cthe setup of\u201d here. ______________________________________________________________________________C4: I guess \"... unable to report ...\"A4: Yes, you are correct. We have edited the text here. ______________________________________________________________________________C5: Meaning not clear to me. If it indicates that this behavior occurs mostly during the night, it should not be part of the description in the table, but should be stated in the results.A5: This statement (more exposed during the night) has been deleted because we agree that it is unclear. We meant that when the crayfish exhibit the guard behavior during the night, they are typically more exposed out of their burrow portal. But we did not quantify this, so we chose to delete it from the text. ______________________________________________________________________________C6: The images are very useful and raise a question since these two behaviors are very similar. As a non-specialist, I would like some words (and possibly a reference) that explain and justify this distinction.A6: After some discussion, we agree with all of your comments regarding our denotation of the \u201cguard\u201d and \u201crest\u201d behaviors. We now refer to both of these behaviors as \u201crest\u201d and separate them based on \u201crest-claws open/rest - open\u201d and \u201crest-claws joined/rest \u2013 joined\u201d. Although rest \u2013 joined may represent a guarding behavior, because this work is preliminary we are keeping the language simple. The manuscript and figures have been changed accordingly. We believe that this language properly describes the behaviors of the crayfish without making any presumptions on the function of the posture. ______________________________________________________________________________C1: There may be an unimodal relationship with these variables, reflecting optima. Did you check for this possibility?A1: We did check for this, but because there was not much variation in the environmental variables, we did not find any such optimum relationship between activity and the environmental variables.
______________________________________________________________________________C1: Personally, I would place the Tables 3 and 4 in the Supporting Information (only a suggestion).A1: We have moved Table 3 and Table 4 to the SI as you have suggested. ______________________________________________________________________________C1: This should be part of the Method section, e.g., on line 216, in the parenthesis.A1: Thank you for noticing this mistake. Yes, the colors on the figure legend should be reversed in Figure 5. We have fixed this accordingly. Now the numbers and percentages add up correctly. ______________________________________________________________________________C1: Again, I have a problem with the values in the text and in Table 5. Crayfish 4 is the only one that foraged during the day. From Table 5, this happened only once, with a total of 360s, which is clearly not equal to 0.34 hr. Please explain.A1: After checking our data, you are correct: we mistakenly calculated a larger percentage for the daytime activity of Crayfish 4. This change is now reflected in the manuscript and in Figure 7 and Table 5. We re-checked the other crayfish as well and the results are correct for them. ______________________________________________________________________________C1: Same remark as for Tables 3 and 4. The results are different but redundant with Fig. 7.A1: We have moved this Table (Table 5) to the SI as you have suggested. ______________________________________________________________________________C1: But Fig. 7 shows the opposite pattern! Table 5 supports this.A1: Thank you for catching this error. This was a typo and now correctly states that \u201cThe majority of their time was spent in the relax position during the night, and relaxing was more likely to occur during the night compared to during the day.\u201d______________________________________________________________________________C1: I think that \"proportion\" is more adequate here.A1: We have reworded this to proportion based on your suggestion. ______________________________________________________________________________C1: Again, I would place this Table in the SI, as the main information can be visualized in Fig. 8.A1: We have moved this table to the SI based on your suggestions. ______________________________________________________________________________C1: This relates to my question concerning Fig. 3 (line 183). Does reference 51 separate these two behaviors based on claws' position?A1: See A6. ______________________________________________________________________________C1: At this point, I am uncomfortable with your choice of words (relax and guard) for the two behaviors based on position of claws. Perhaps \"guard - claws open\" and \"guard - claws joined\" may be more appropriate?A1: See A6.______________________________________________________________________________Attachmentresponse_to_reviewers_2.docxSubmitted filename: Click here for additional data file. 11 Aug 2022On the surface or down below: Field observations reveal a high degree of surface activity in a burrowing crayfish, the Little Brown Mudbug (Lacunicambarus thomai)PONE-D-21-39709R2Dear Dr. Diehl,We\u2019re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.Within one week, you\u2019ll receive an e-mail detailing the required amendments. 
When these have been addressed, you\u2019ll receive a formal acceptance letter and your manuscript will be scheduled for publication.An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they\u2019ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.Kind regards,Louis-Felix Bersier, Ph.D.Academic EditorPLOS ONEAdditional Editor Comments :Thank you for your thorough consideration of my comments. I went through your corrected version and found that it is ready for publication. I just noted a probable mistake on lines 243-244: \"\" rather than \"\". Please check this before submitting your final document.Reviewers' comments: 2 Sep 2022PONE-D-21-39709R2 On the surface or down below: Field observations reveal a high degree of surface activity in a burrowing crayfish, the Little Brown Mudbug (Lacunicambarus thomai) Dear Dr. Diehl:I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staffon behalf ofProf Louis-Felix Bersier Academic EditorPLOS ONE"} +{"text": "Extractive document summarization (EDS) is usually seen as a sequence labeling task, which extracts sentences from a document one by one to form a summary. However, extracting sentences separately ignores the relationship between the sentences and documents. One solution is to use sentence position information to enhance sentence representation, but this will cause the sentence-leading bias problem, especially in news datasets. In this paper, we propose a novel sentence centrality for the EDS task to address these two problems. The sentence centrality is based on directed graphs; while reflecting the sentence-document relationship, it also reflects the sentence position information in the document. We implicitly strengthen the relevance of sentences and documents by using sentence centrality to enhance sentence representation. Notably, we replaced the sentence position information with sentence centrality to reduce sentence-leading bias without causing model performance degradation. 
Experiments on the CNN/Daily Mail dataset showed that EDS models with sentence centrality improved significantly over baseline models. Automatic document summarization aims to produce a concise summary of a document while preserving its crucial information. Existing summarization methods can be divided into two categories: abstractive and extractive methods. Abstractive methods generate a summary word by word from scratch, and can introduce new words that do not appear in the document. Extractive methods, in contrast, form a summary by selecting sentences directly from the document.In recent years, extractive document summarization (EDS) based on neural networks has achieved great success. However, extracting sentences separately ignores the relationship between sentences and documents; a common remedy is to enhance sentence representations with sentence position information. There is much excellent work based on this approach. For example, Zhang et al. proposed one such model.The sentence centrality is usually based on undirected graphs and is widely used in unsupervised extractive summarization tasks to identify salient sentences in a document. Previous work considered sentence centrality as a signal to measure the importance of sentences. Different from these methods, we treat the centrality of a sentence as a property of the associated sentence-document pair rather than merely an importance score.Following our intuition mentioned before, we can see that sentence centrality need no longer be restricted to unsupervised extractive methods. We develop two methods to apply sentence centrality to enhance sentence representations: (1) embedding the sentence centrality directly into the sentence representation output by the encoder; (2) updating the sentence representation indirectly via a Graph Attention Network (GAT). We build models on both approaches.The contributions of our work are as follows:We propose a novel sentence centrality for the EDS task and two approaches to use sentence centrality to enhance sentence representation. With the help of the sentence centrality, the relationship between sentences and documents is implicitly strengthened, thus improving the performance of extractive summarization.We propose to replace sentence position information with sentence centrality, which can reduce the sentence-leading bias in news datasets caused by position information.The remainder of this article is arranged as follows. We introduce some related topics on EDS in the Related Work section. In the Method section, we define the EDS task and then introduce our sentence centrality-enhanced extractive summarization models. We present the training details, parameter settings and experimental results in the Experiment section. In the Discussion section, we discuss why sentence centrality works. Finally, we conclude our paper in the Conclusion section.To make the paper self-contained, we will introduce some related topics on EDS and on sentence centrality-based summarization methods.The EDS task aims to extract sentences from the original document to form a summary. The task first encodes the sentences with the help of an encoder to obtain a sentence vector. The sentence vector is then passed through a classification layer to determine whether the sentence should be included in the summary. Nallapati et al. and Zhou et al. developed representative models along these lines. Although these methods are effective, they mostly rely on sentence position information to enhance sentence representation. We introduce sentence centrality information in the model and remove sentence position information, which improves model performance and does not cause sentence-leading bias.The sentence centrality is often used to measure the importance of a sentence in unsupervised EDS tasks. In this setting, a document is represented as a graph, with nodes representing sentences and edges connecting sentences weighted according to their similarity.
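To make this graph-based, following-content-only centrality concrete, here is a minimal sketch, assuming cosine similarity over simple bag-of-words counts (the function names and the similarity measure are illustrative assumptions; the models in this paper compute similarity between learned sentence vectors):

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def sentence_centrality(sentences):
    """Directed degree centrality: sentence i is scored only by its
    similarity to the content that follows it in the document."""
    bows = [Counter(s.lower().split()) for s in sentences]
    n = len(bows)
    return [sum(cosine(bows[i], bows[j]) for j in range(i + 1, n))
            for i in range(n)]

doc = ["The cat sat on the mat.",
       "A dog chased the cat off the mat.",
       "The mat was red."]
print(sentence_centrality(doc))  # earlier sentences accumulate more forward similarity
```

Note that the last sentence always receives a score of 0 under this scheme, which is one way the measure encodes position information without an explicit position embedding.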
TextRank, by contrast, calculates the centrality of each sentence by running a PageRank-style algorithm over this similarity graph. There are three key differences in our sentence centrality compared to the previous methods. (1) We calculate the centrality of a sentence by counting only the similarity between that sentence and the content that follows it, not its similarity to all other content. (2) The centrality of a sentence is considered a unique property of the associated sentence and document rather than just a measure of the importance of the sentence. Therefore, we use sentence centrality to strengthen the sentence representation. (3) We apply sentence centrality to supervised EDS.An essential step in the extractive document summarization task is to obtain sentence embeddings. Traditional sentence embedding methods are based on weighting and averaging word vectors to construct sentence vectors. Kedzie et al., for example, averaged word embeddings to construct sentence representations. Traditional sentence embedding methods are simple and effective. However, extractive document summarization is a document-level task, and the relationship between sentences and documents needs to be considered when obtaining sentence embeddings. Most works therefore enhance sentence embeddings with document-level context, typically sentence position information. Different from their work, we use sentence centrality information to enhance sentence representations. Compared to using sentence position information, our methods are able to achieve performance improvements while reducing the sentence-leading bias problem.We are given a document d that contains n sentences, d = {s1, s2, \u2026, sn}, where si = {wi1, wi2, \u2026, wim} is the i-th sentence in the document and wij is the j-th word in the i-th sentence. EDS can be seen as a sequence labeling task: each sentence si receives a label yi \u2208 {0, 1} indicating whether it should be included in the summary, and we use sentence centrality to enhance the sentence representations before classification.Heterogeneous summarization graph (HSG) is an extractive model that represents a document as a graph containing both word nodes and sentence nodes. Its graph attention (GAT) layer is modified by infusing the scalar edge weights eij, which are mapped to the multi-dimensional embedding space. The weights of the edge eij are the sum of the sentence centrality and the TF-IDF value of the words, because the types of nodes connected by the edge are different. The modified GAT layer is designed as follows: zij = LeakyReLU(Wa[Wq hi; Wk hj; eij]), \u03b1ij = exp(zij) / \u2211_(l\u2208Ni) exp(zil), ui = \u03c3(\u2211_(j\u2208Ni) \u03b1ij Wv hj), where hi is the hidden state of the input node, \u03b1ij refers to the weight of attention between hi and hj, and Wa, Wq, Wk, Wv are trainable parameters. The residual connection hi' = ui + hi is used to avoid gradient vanishing after several iterations. Given a constructed graph G with word features Xw and sentence node features Xs, the sentence nodes are updated with their neighbor word nodes via the above GAT and a feed-forward (FFN) layer: U_(s\u2190w) = GAT(Hs, Hw, Hw), Hs' = FFN(U_(s\u2190w) + Hs). The final sentence representation is Hs'. Our modified HSG model is presented in Fig 3.We conduct our experiments on the CNN/Daily Mail and XSum datasets. CNN/Daily Mail is a well-known news dataset for single-document extractive summarization, which is split into three parts by Hermann et al. for training, validation, and testing. XSum is a one-sentence summary dataset designed to answer the question \u201cWhat is the article about?\u201d. We conduct experiments on this dataset to study whether sentence centrality-enhanced EDS models are still effective when dealing with datasets with short summaries.We only use the XSum dataset for ablation experiments, as published extraction results on this dataset are few and insufficient to support our model performance comparison.We limit the sentence length to 50 words when calculating sentence centrality. Both models are trained on a single GPU (GeForce RTX 3080). The bert-base-uncased model is available at https://github.com/huggingface/pytorch-pretrained-BERT. The model is trained for 40000 steps. The best result on the validation set occurs at step 37000. The Adam algorithm is applied to optimize the loss function. The learning rate schedule follows Vaswani et al.
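The schedule itself is not spelled out in the text; assuming \u201cfollowing Vaswani et al.\u201d refers, as is conventional, to the warmup-then-inverse-square-root rule from the Transformer paper, a minimal sketch looks like this (the d_model and warmup_steps values are illustrative, not settings reported by the authors):

```python
def transformer_lr(step, d_model=768, warmup_steps=10000):
    """Learning rate at a training step: linear warmup for the first
    warmup_steps, then decay proportional to 1/sqrt(step)."""
    step = max(step, 1)  # guard against step ** -0.5 at step 0
    return (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)

# The peak learning rate occurs exactly when warmup ends:
print(transformer_lr(10_000))  # peak
print(transformer_lr(37_000))  # rate near the reported best validation step
```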
Reviewers' comments:Reviewer's Responses to Questions Comments to the Author1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1:\u00a0YesReviewer #2:\u00a0Partly********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1:\u00a0N/AReviewer #2:\u00a0Yes********** 3. Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1:\u00a0The manuscript \u201cImproving Extractive Document Summarization with Sentence Centrality\u201d highlights a way to enhance sentence representation for extractive document summarization which in turn boosts performance of existing EDS techniques. The paper is well written and contributions are clearly laid out and explained, and the methodology used is sound. Still, some issues are cracking through. The following is a list of items which should be addressed:1. In the introduction, the manuscript references findings of Zheng and Lapata that \u201csimilarity with the previous content will damage centrality\u201d. In the method proposed here, forward edges are removed and only backward edges are considered, but this seems confusing. Why is this the right approach if this tends to damage the centrality score?2. Acronym EDS in line 10 is not defined \u2013 please add the full text of its meaning.3. The structure of the paper is missing at the end of the Introduction.4. While being a sufficient example of a graph, figure 1 does not convey the idea of degree centrality in a meaningful way. Maybe node size or colour should be varied based on centrality score to reinforce the idea that some sentences are more important than the others.5. In the approach involving the heterogeneous graph neural network, edge features were extended with sentence centrality. If sentence centrality is a node characteristic, why is it used to enhance edge features?6. In equation 6, the indices in the LHS appear to be duplicated. Equation 7 contains a similar problem. 
Although it may be obvious to a knowledgeable reader, the terms of equation 11 should still be defined for rigour, and it should be done for all other equations as well. For example, in equation (1) hi and hi+ are not defined.7. It is not quite clear how the centrality embedding (EmbSCi) is obtained. Specifically, what are the terms on the RHS of equation 9? If SCi is the centrality of sentence i, what is the meaning of the exponents 1 to emb? Furthermore, equation 9 defines EmbSCi as a set of terms SCi, but this seems unusual w.r.t. embeddings usually being vectors.8. Please define HSG in line 168.9. Please add a short explanation of Trigram Blocking.10. How does table 3 relate to table 1 and table 2? Are these various configurations of input data to SCBERT and SCHSG models from previous tables? If it is indeed so, please make sure it is more clear from the text to prevent any misunderstandings.11. Please cite everything which is not original work presented in the paper, e.g. ROUGE, the bert-base-uncased model, and some other instances.12. Please define the used ROUGE scores in the appendix.13. The \u201cAnalysis\u201d section contains a reference to \u201cORACLE summary\u201d; please elaborate some more on what the ORACLE is, how it works, and cite the relevant paper.14. Consider renaming the Analysis chapter to Discussion and expanding it.15. Axes on figure 3 should be appropriately labelled to make the figure stand on its own.16. It would be useful to know how the proposed model performs with other similar node-local measures such as the selectivity measure. It might be useful as a basis for future work.17. The Introduction contains the abbreviation \u201cEDT\u201d which is never elaborated or mentioned again. This seems like a very minor typographical error, and presumably was meant to say \"EDS\". In the same vein, the \u201cAblation Study\u201d section contains the sentence \u201cThe results show that the experimental performance on ROUGE-L, ROUGE-2, ROUGE-L\u201d. Is the first ROUGE-L in line 244 ROUGE-1?18. In several places a space after a comma is missing.Reviewer #2:\u00a0### Overview and general recommendationThe paper addresses the problem of extractive document summarization. It uses sentence centrality information to enhance sentence representation. This information should reflect the sentence-document relationship and the sentence position information as well. The sentence representation is enhanced in two ways: one is embedded directly into the sentence representation output and the other updates the sentence representation indirectly via a graph attention network.Because of advances in abstractive summarization, the task of extractive document summarization is probably no longer very challenging or interesting for the research field. The paper provides a good method for comparing the impact of sentence position vs. sentence centrality. However, one of the conclusions is not well supported by the experimental results, but overall, this is a very well-written paper with sound methodology, ablation studies, and a well-formulated research question.### Major comments- According to Table 3, the results of different sentence information don't support the conclusion that sentence centrality is a better choice than sentence position. 
Given that ROUGE is a poor metric, the difference of approximately 0.1 is insignificant.- More ideas for future research could be added to the conclusion.- The related research section could be expanded as well, e.g., adding a section on sentence embeddings, since the paper works on that level of representation.- Some typos should be fixed, and some sentences could be rewritten to make them clearer; take a look at the minor remarks- There is no link to the code repository.### Minor comments- Typo in the caption of Fig. 1, \u201cdocumentt\u201d- Line 176: \u201cusing\u201d \u2192 \u201cuse\u201d- Table 3: \u201csummarizers\u201d \u2192 \u201csummarizer\u201d- Fig. 2: the first sentence should be corrected- Line 244: the first \u201cROUGE-L\u201d should be \u201cROUGE-1\u201d********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose \u201cno\u201d, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1:\u00a0NoReviewer #2:\u00a0NoWhile revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool,\u00a0https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at\u00a0figures@plos.org. Please note that Supporting Information files do not need this step. 18 Apr 2022We greatly appreciate the reviewers taking the time to provide constructive comments and helpful suggestions. There is no doubt that the suggestions have significantly raised the quality of the manuscript and have enabled us to improve it. Each suggested revision and comment brought forward by the reviewers was carefully considered and incorporated. We have carefully addressed all the reviewers' concerns. Please see our replies. Changes highlighted in red have been made accordingly in the revised manuscript.Comments to the Author________________________________________1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: PartlyResponse: Thanks for your review. We added experiments to verify the effectiveness of our method. We conducted experiments on the XSum dataset. The results demonstrate the superiority of sentence centrality compared to positional information.________________________________________2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: N/A Reviewer #2: Yes Response: Thank you for agreeing that our analysis is appropriate and rigorous.________________________________________3. Have the authors made all data underlying the findings in their manuscript fully available? 
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. Reviewer #1: Yes Reviewer #2: Yes Response: Thank you for your approval of our data.________________________________________4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: YesResponse: Thank you for your approval.________________________________________Review Comments to the Author________________________________________Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1: The manuscript \u201cImproving Extractive Document Summarization with Sentence Centrality\u201d highlights a way to enhance sentence representation for extractive document summarization which in turn boosts performance of existing EDS techniques. The paper is well written and contributions are clearly laid out and explained, and the methodology used is sound. Still, some issues are cracking through. The following is a list of items which should be addressed: 1. In the introduction, the manuscript references findings of Zheng and Lapata that \u201csimilarity with the previous content will damage centrality\u201d. In the method proposed here, forward edges are removed and only backward edges are considered, but this seems confusing. Why is this the right approach if this tends to damage the centrality score?Response: We appreciate the reviewer for asking questions about the details of sentence centrality. According to the original paper of Zheng and Lapata, the centrality score of s_i based on the directed graph can be defined as follows:centrality(s_i) = \u03bb_1 \u2211_(j<i) e_ij + \u03bb_2 \u2211_(j>i) e_ij, where the optimal \u03bb_1 tends to be negative.In our paper, we do not calculate the similarity between a sentence and its previous content, which means that we set the weights \u03bb_1 of the forward-looking directed edges equal to 0. We think the descriptions of \u201cforward-looking\u201d and \u201cforward\u201d made our point unclear, so we revised this part of the content to convey our idea clearly, in lines 43-45. The description \u201cInspired by their work, we remove the forward edges of sentences on directed graphs and calculate the sentence centrality based only on the weights of backward edges\u201d was modified to \u201cInspired by their work, we calculate the centrality score of a sentence based only on the similarity between the sentence and its following content\u201d.________________________________________2. 
Acronym EDS in line 10 is not defined \u2013 please add the full text of its meaning.Response: We appreciate the reviewer for his/her careful review. We have added the full text of the meaning of EDS in line 10.________________________________________3. The structure of the paper is missing at the end of the Introduction.Response: We thank the reviewer for reminding us to describe the structure of our paper, and there is no doubt that this suggestion makes the paper more readable. We have added a description of the paper structure in lines 76-81.________________________________________4. While being a sufficient example of a graph, figure 1 does not convey the idea of degree centrality in a meaningful way. Maybe node size or colour should be varied based on centrality score to reinforce the idea that some sentences are more important than the others.Response: We thank the reviewer for the helpful suggestions on our figure 1. We have now updated figure 1: we used different colors to indicate different sentences, and used the size of the nodes to indicate the centrality scores. We also increased the number of sentence nodes in order to convey our ideas more clearly. ________________________________________5. In the approach involving the heterogeneous graph neural network, edge features were extended with sentence centrality. If sentence centrality is a node characteristic, why is it used to enhance edge features?Response: We are grateful to the reviewer for the questions about our method. We use sentence centrality to enhance edge features as a way of modifying the graph attention (GAT) layer. In the heterogeneous graph neural network, the sentence representations are updated with their neighbor word nodes via a GAT layer and a feed-forward (FFN) layer. The GAT layer is modified by infusing the scalar edge weights e_ij (described in equation 13), which are mapped to the multi-dimensional embedding space. The weights of the edge e_ij are the sum of the sentence centrality and the TF-IDF value of the words, because the types of nodes connected by the edge are different.We added equations and a textual explanation to describe how sentence representations are updated by GAT and FFN in equation 17, equation 18 and lines 228-236. We also added a textual explanation of why we use sentence centrality to enhance edge features in lines 216-221.________________________________________6. In equation 6, the indices in the LHS appear to be duplicated. Equation 7 contains a similar problem. Although it may be obvious to a knowledgeable reader, the terms of equation 11 should still be defined for rigour, and it should be done for all other equations as well. For example, in equation (1) hi and hi+ are not defined.Response: We appreciate the reviewer for his/her careful review, and we are sorry for our lack of rigour. We have corrected equation 6 and equation 7. We have carefully checked the equations and defined the terms that were not strictly defined.For equation 1, we defined h_i and h_i^+, and explained the meaning of sim in lines 150-152.For equation 6, we explained the meaning of the term u_ij in line 184.For equation 9 and equation 10, we modified these two equations to make the meaning of EmbSC_i clearer. We defined each term in lines 187-192.For equation 11, we defined each term of the equation and explained the meaning in lines 198-200. ________________________________________7. It is not quite clear how the centrality embedding (EmbSCi) is obtained. Specifically, what are the terms on the RHS of equation 9? 
If SCi is the centrality of sentence i, what is the meaning of the exponents 1 to emb? Furthermore, equation 9 defines EmbSCi as a set of terms SCi, but this seems unusual w.r.t. embeddings usually being vectors.Response: We appreciate the reviewer for his/her careful review. Our definition of equation 9 was not rigorous, leaving our point unclear. We are sorry for this. We have modified equation 9. Now equation 9 is:EmbSC_i = W_sc(SC_i), where W_sc is a weight matrix with the weights set to 1. EmbSC_i is the centrality embedding of sentence s_i, which has the same dimension as the sentence embedding.EmbSC_i is obtained by mapping the normalized scalar sentence centrality to the multi-dimensional embedding space. The RHS of the new equation 9, W_sc(SC_i), means that SC_i is mapped to a higher-dimensional space by W_sc. The RHS of our previous equation 9 was trying to convey the same meaning as the new equation 9, but we are sorry that we did not make it clearer. The exponents 1 to emb mean that we map the sentence centrality to the emb-dimensional space in the previous equation 9. ________________________________________8. Please define HSG in line 168.Response: We thank the reviewer for reminding us to define HSG. We added the full meaning of HSG in lines 202-204, and we present our sentence centrality-enhanced extractive document summarization model based on HSG in Fig 3.________________________________________9. Please add a short explanation of Trigram Blocking.Response: We thank the reviewer for reminding us to add a short explanation of Trigram Blocking. We added the short explanation of Trigram Blocking in lines 270-273. ________________________________________10. How does table 3 relate to table 1 and table 2? Are these various configurations of input data to SCBERT and SCHSG models from previous tables? If it is indeed so, please make sure it is clearer from the text to prevent any misunderstandings.Response: We thank the reviewer for his/her careful review. All the models in Table 3 are based on the pre-trained language model BERT, where the SCES model is exactly our SCBERT in Table 1. The various configurations of experimental parameters for the SCES model are the same as for the SCBERT model, except that the datasets used are different. We added the relevant description in lines 324-327.________________________________________11. Please cite everything which is not original work presented in the paper, e.g. ROUGE, the bert-base-uncased model, and some other instances.Response: We thank the reviewer for his/her careful review. We checked our paper carefully and added citations to the work that needed to be cited.Line 17, we added a reference to the transformer.Line 213, we added a reference to the Convolutional Neural Network. Line 216, we added a reference to the Bidirectional Long Short-Term Memory.Line 239, we added references to the CNN/Daily Mail and XSum datasets.Line 297, we added references to ROUGE.The \u201cbert-base-uncased\u201d model is published at https://github.com/huggingface/pytorch-pretrained-BERT; we put the URL in lines 256-257.________________________________________12. Please define the used ROUGE scores in the appendix.Response: We thank the reviewer for reminding us to add the definition of the ROUGE scores. We defined the used ROUGE scores in S1 Appendix.________________________________________13. 
The \u201cAnalysis\u201d section contains a reference to \u201cORACLE summary\u201d; please elaborate some more on what the ORACLE is, how it works, and cite the relevant paper.Response: We thank the reviewer for his/her rigorous review. We elaborated on the ORACLE summary in lines 353-359, including what the ORACLE is and how it works. We also cited the relevant papers.________________________________________14. Consider renaming the Analysis chapter to Discussion and expanding it.Response: We thank the reviewer for the suggestions on the structure of our article.We have renamed the Analysis chapter to Discussion. In this part, we added the content on ORACLE according to your comment 13. We discuss why sentence centrality is a better choice than sentence position information.________________________________________15. Axes on figure 3 should be appropriately labelled to make the figure stand on its own.Response: We thank the reviewer for his/her careful review. Since we added a figure of the heterogeneous graph model, the original figure 3 is now figure 4. The axes on figure 4 are now appropriately labeled.________________________________________16. It would be useful to know how the proposed model performs with other similar node-local measures such as the selectivity measure. It might be useful as a basis for future work.Response: We thank the reviewer for his/her constructive suggestion. This suggestion makes us realize that our exploration of sentence centrality needs to go further. We have written this suggestion into future work. Thanks again for the constructive suggestion.________________________________________17. The Introduction contains the abbreviation \u201cEDT\u201d which is never elaborated or mentioned again. This seems like a very minor typographical error, and presumably was meant to say \"EDS\". In the same vein, the \u201cAblation Study\u201d section contains the sentence \u201cThe results show that the experimental performance on ROUGE-L, ROUGE-2, ROUGE-L\u201d. Is the first ROUGE-L in line 244 ROUGE-1?Response: We appreciate the reviewer for his/her careful review. \u201cEDT\u201d is a typographical error; we have corrected it in line 41. The first \u201cROUGE-L\u201d has been corrected to \u201cROUGE-1\u201d in line 306.________________________________________18. In several places a space after a comma is missing.Response: We appreciate the reviewer for his/her careful review. We checked our paper carefully and added the missing spaces after commas.We again thank the reviewer for taking the time to review our article. From the review comments, we can feel the reviewer's rigorous attitude toward scholarship. There is no doubt that the reviewer's comments make our manuscript more rigorous and clearer. ________________________________________Reviewer #2: ### Overview and general recommendation The paper addresses the problem of extractive document summarization. It uses sentence centrality information to enhance sentence representation. This information should reflect the sentence-document relationship and the sentence position information as well. The sentence representation is enhanced in two ways: one is embedded directly into the sentence representation output and the other updates the sentence representation indirectly via a graph attention network. Because of advances in abstractive summarization, the task of extractive document summarization is probably no longer very challenging or interesting for the research field. 
The paper provides a good method for comparing the impact of sentence position vs. sentence centrality. However, one of the conclusions is not well supported by the experimental results, but overall, this is a very well-written paper with sound methodology, ablation studies, and a well-formulated research question. ### Major comments - According to Table 3, the results of different sentence information don't support the conclusion that sentence centrality is a better choice than sentence position. Given that ROUGE is a poor metric, the difference of approximately 0.1 is insignificant.Response: We thank the reviewer for pointing out a potential limitation of our study. We agree with the reviewer that the superiority of sentence centrality cannot be demonstrated from the ROUGE scores alone.In the extractive summarization task, position information is usually used to enhance sentence representation. Although doing so will improve model extraction performance significantly, it will cause sentence-leading bias, especially in news datasets. We present this phenomenon in lines 23-30. We replaced the sentence position information with sentence centrality to reduce sentence-leading bias without causing model performance degradation, which can be seen in Figure 5 and Table 3. We added experiments on the news dataset XSum. The experimental results show that the replacement of sentence position information by sentence centrality does not cause model performance degradation.Before reaching the conclusion \"sentence centrality is a better choice than sentence position\", we added a description of the advantages of sentence centrality in reducing sentence-leading bias, discussed in the Discussion section.Combining the advantages of sentence centrality in reducing sentence-leading bias and the experimental results in Table 3, we can conclude that sentence centrality information has certain advantages over sentence position information in the extractive summarization task.We are grateful to the reviewer for his/her constructive comments, which made our logic more rigorous and greatly improved the quality of our paper. ________________________________________- More ideas for future research could be added to the conclusion.Response: We thank the reviewer for reminding us to expand our future work. We added more ideas for future research, including exploring whether sentence centrality is also effective in other tasks, etc., which are presented in lines 374-381. ________________________________________- The related research section could be expanded as well, e.g., adding a section on sentence embeddings, since the paper works on that level of representation. Response: We thank the reviewer for his/her constructive suggestion. Adding a section on sentence embeddings makes our article more rigorous. We have expanded our related work in lines 113-131.________________________________________- Some typos should be fixed, and some sentences could be rewritten to make them clearer; take a look at the minor remarks.Response: We thank the reviewer for his/her careful review. We have carefully checked our article and corrected the errors. We present the modification details in the ### Minor comments section.________________________________________- There is no link to the code repository.Response: We are pleased that the reviewer is interested in our work.The code is released at https://github.com/GongShuai8210/SCES.________________________________________### Minor comments - Typo in the caption of Fig. 
1, \u201cdocumentt\u201d - Line 176: \u201cusing\u201d \u2192 \u201cuse\u201d - Table 3: \u201csummarizers\u201d \u2192 \u201csummarizer\u201d - Fig. 2: the first sentence should be corrected - Line 244: the first \u201cROUGE-L\u201d should be \u201cROUGE-1\u201dResponse: We thank the reviewer for taking the time to review our article carefully. We have now corrected these typos.-Typo in the caption of Fig. 1, \u201cdocumentt\u201d \u2192 \u201cdocument\u201d- Line 213: \u201cusing\u201d has been corrected to \u201cuse\u201d - Table 3: \u201csummarizers\u201d has been corrected to \u201csummarizer\u201d - Fig. 2: the first sentence has been corrected to \u201cEmbSC_i is the centrality embedding of sentence s_i, which is directly embedded in the sentence representation generated by BERT\u201d. - Line 306: the first \u201cROUGE-L\u201d has been corrected to \u201cROUGE-1\u201d.We again thank the reviewer for his/her careful review and constructive suggestions. There is no doubt that the reviewer's suggestions improve the quality of our article.AttachmentResponse to Reviewers.pdfSubmitted filename: Click here for additional data file. 27 Apr 2022Improving Extractive Document Summarization with Sentence CentralityPONE-D-22-05139R1Dear Dr. Gong,We\u2019re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.Within one week, you\u2019ll receive an e-mail detailing the required amendments. When these have been addressed, you\u2019ll receive a formal acceptance letter and your manuscript will be scheduled for publication.An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they\u2019ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.Kind regards,Sanda Martin\u010di\u0107-Ip\u0161i\u0107, PhDAcademic EditorPLOS ONEAdditional Editor Comments :I have reviewed the revised manuscript, the responses to the reviewer comments, and the availability of the data and software. The authors have addressed all reviewer comments and improved the quality of the manuscript. I am pleased to report that the current manuscript revision adequately addresses all issues and meets the required PLOS ONE criteria.Reviewers' comments: 14 Jul 2022PONE-D-22-05139R1 Improving Extractive Document Summarization with Sentence Centrality Dear Dr. Zhu:I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. 
If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staffon behalf ofDr. Sanda Martin\u010di\u0107-Ip\u0161i\u0107 Academic EditorPLOS ONE"} +{"text": "We report the coding-complete genome sequences of 25 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) sublineage B.1.1.529 Omicron strains obtained from Bangladeshi individuals in samples collected between December 2021 and January 2022. Genomic data were generated by Nanopore sequencing using the amplicon sequencing approach developed by the ARTIC Network. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (family Coronaviridae, genus Betacoronavirus) is a positive-sense single-stranded RNA virus.As part of the ongoing SARS-CoV-2 genomic surveillance (protocol IEDCR/IRB/2020/11) by the Institute of Epidemiology, Disease Control, and Research (IEDCR), Bangladesh, two specimens were obtained from individuals who had recently visited Africa and had reported coronavirus disease 2019 (COVID-19) symptoms on return to Bangladesh. These specimens were found to be positive for the SARS-CoV-2 nucleocapsid (N) gene but negative for the S gene by TaqPath COVID-19 Combo reverse transcription (RT)-PCR. Seventeen more SARS-CoV-2-positive specimens from the countrywide SARS-CoV-2 surveillance showed similar results. These specimens were further screened with the TaqMan SARS-CoV-2 mutation panel (Applied Biosystems), which indicated the presence of S:N501Y, one of the signature mutations of the SARS-CoV-2 Omicron variant. This S:N501Y mutation is not present in the S gene of the Delta variant, which was the predominant strain circulating in Bangladesh in the last few months of 2021. Overall, a total of 25 specimens were used as input for genome sequencing using the Oxford Nanopore Technologies sequencing platform. The detailed information for all 25 individuals is presented in the accompanying table. Sequencing libraries were prepared using the multiplex PCR amplicon sequencing approach developed by the ARTIC Network, and reads were analyzed following the ARTIC bioinformatics protocol (https://artic.network/ncov-2019/ncov2019-bioinformatics-sop.html). In total, 4,324,431 reads were obtained. Compared to the reference Wuhan Hu-1 genome (GenBank accession number NC_045512.2), the signature amino acid alterations in the spike protein matching the genetic markers of sublineages B1.1.529.1 and B1.1.529.2 were identified. Among the 25 sequences, Pangolin (github.com/cov-lineages/pangolin) assigned 19 sequences to lineage B.1.1.529.1 (BA.1), and six strains were found to be lineage B.1.1.529.2 (BA.2). The data from this study can be found under GISAID accession numbers EPI_ISL_7404462, EPI_ISL_7404463, EPI_ISL_8146774, EPI_ISL_8414987, EPI_ISL_8146772, EPI_ISL_8146773, EPI_ISL_8414988, EPI_ISL_8096971, EPI_ISL_8414989, EPI_ISL_8414990, EPI_ISL_8215676, EPI_ISL_8215677, EPI_ISL_8415001, EPI_ISL_8215678, EPI_ISL_8414993, EPI_ISL_8415003, EPI_ISL_8415004, EPI_ISL_8414994, EPI_ISL_8414995, EPI_ISL_9456595, EPI_ISL_9456604, EPI_ISL_9456606, EPI_ISL_9456607, EPI_ISL_9456620, and EPI_ISL_9456621. 
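As a side note on quality control, announcements of coding-complete genomes typically verify that consensus sequences contain few ambiguous bases. The sketch below is a generic illustration of such a check, not part of the authors' ARTIC workflow, and the input file name is hypothetical:

```python
def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line.upper())
    if header is not None:
        yield header, "".join(chunks)

def completeness(seq):
    """Fraction of unambiguous bases (A/C/G/T) in a consensus genome."""
    return sum(seq.count(base) for base in "ACGT") / len(seq) if seq else 0.0

for name, seq in read_fasta("consensus_genomes.fasta"):  # hypothetical input
    print(f"{name}\t{len(seq)} bp\t{completeness(seq):.1%} unambiguous")
```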
The Sequence Read Archive (SRA) and GenBank accession numbers are listed in the accompanying table."}
+{"text": "Background: To elucidate the potential biological function of hsa_circ_0062270 in the malignant process of melanoma and its potential target. Methods: Quantitative real-time polymerase chain reaction (qRT-PCR) was conducted to examine the relative level of hsa_circ_0062270 in melanoma tissues and normal skin tissues. The diagnostic and prognostic potentials of hsa_circ_0062270 in melanoma were evaluated. The regulatory effect of hsa_circ_0062270 on the expression of its linear transcript Cell division cycle protein 45 (CDC45) was also examined. Results: Hsa_circ_0062270 was up-regulated in melanoma samples and cell lines, and displayed certain diagnostic and prognostic potentials in melanoma. Inhibition of hsa_circ_0062270 attenuated the proliferative, migratory and invasive functions. Hsa_circ_0062270 could stabilize the expression of the linear transcript CDC45, and thus participated in the malignant process of melanoma. Conclusion: Hsa_circ_0062270 promotes proliferative, migratory and invasive functions of melanoma cells via stabilizing the linear transcript CDC45. Hsa_circ_0062270 can be used in the diagnosis and treatment of melanoma. Melanoma is a highly malignant skin cancer derived from melanocytes. Melanoma accounts for more than 70% of skin cancer deaths. CircRNAs are novel noncoding RNAs that have been widely analyzed. They are extensively involved in various fields of life sciences. Hsa_circ_0062270 is located on chromosome 22: 19496052-19502571, and its gene symbol is CDC45. Evidence has shown that hsa_circ_0062270 is obviously up-regulated in melanoma. The normal skin tissues and melanoma tissues of 50 patients with melanoma in our hospital were selected. The ethics committee of The First People\u2019s Hospital of Lianyungang approved our study. Signed written informed consents were obtained from all participants before the study. Melanoma cells and normal human epidermal melanocytes (NHEM) were provided by Cell Bank of Type Culture Collection. Cells were cultivated in DMEM containing 10% fetal bovine serum (FBS), 100 U/mL penicillin and 100\u00a0\u03bcg/ml streptomycin at 5% CO2 and 37\u00b0C. Cell transfection was performed using Lipofectamine 3,000 as per the protocols. Cell proliferation was determined by EdU. RNA isolation was done with TRIzol, and RNAs were then reversely transcribed into cDNAs. U6 and GAPDH were used as the internal controls with the 2^\u2212\u0394\u0394Ct method. Primers used were shown below: hsa_circ_0062270: Forward: 5\u2032-AGG\u200bATG\u200bGCT\u200bCAG\u200bGGA\u200bCAG\u200bAT-3\u2032, reverse: 5\u2032-AGG\u200bCCA\u200bTGG\u200bTAC\u200bAGC\u200bTTG\u200bTC-3\u2032; CDC45: Forward: 5\u2032-TTC\u200bGTG\u200bTCC\u200bGAT\u200bTTC\u200bCGC\u200bAAA-3\u2032, reverse: 5\u2032-TGG\u200bAAC\u200bCAG\u200bCGT\u200bATA\u200bTTG\u200bCAC-3\u2032; GAPDH: Forward: 5\u2032-CGG\u200bAGT\u200bCAA\u200bCGG\u200bATT\u200bTGG\u200bTCG\u200bTAT-3\u2032, reverse: 5\u2032-AGC\u200bCTT\u200bCTC\u200bCAT\u200bGGT\u200bGGT\u200bGAA\u200bGAC-3\u2032; U6: Forward: 5\u2032-GCT\u200bGAG\u200bGTG\u200bACG\u200bGTC\u200bTCA\u200bAA-3\u2032, reverse: 5\u2032-GCC\u200bTCC\u200bCAG\u200bTTT\u200bCAT\u200bGGA\u200bCA-3\u2032. A375 cells were exposed to Actinomycin D (3\u00a0\u03bcg/ml). They were collected for isolating total RNAs. Expressions of hsa_circ_0062270 and CDC45 were detected by quantitative real-time polymerase chain reaction.
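To make the 2^\u2212\u0394\u0394Ct quantification above concrete, here is a minimal sketch in Python. The function name and the Ct values are hypothetical placeholders for illustration; GAPDH plays the internal-control role, as in the protocol just described.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target transcript by the 2^-ddCt method.

    ct_target / ct_ref: Ct of the target (e.g., hsa_circ_0062270) and the
    reference gene (e.g., GAPDH) in the sample of interest (e.g., tumor);
    *_ctrl: the same two Ct values in the control (normal) sample.
    """
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # delta-delta Ct
    return 2.0 ** (-dd_ct)                      # fold change vs. control

# Hypothetical Ct values, for illustration only
fold = relative_expression(ct_target=24.1, ct_ref=18.0,
                           ct_target_ctrl=26.3, ct_ref_ctrl=18.2)
print(f"hsa_circ_0062270 fold change: {fold:.2f}")   # -> 4.00
```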
Cellular RNA (4\u00a0mg) was treated either with RNase R (10\u00a0U/\u03bcg) at 37\u00b0C for 30\u00a0min or not, followed by purification using RNeasy MinElute. Cells were seeded into the top chamber and bottom chamber. After 48-h incubation, cells in the bottom were fixed, dyed in crystal violet and captured. Migratory cells were counted in five randomly selected fields per sample. Invasion assay was conducted using a transwell chamber precoated with 100\u00a0\u03bcL of Matrigel. In detail, Matrigel was diluted in serum-free medium at 1:3, which was coated on the top of a chamber. Data were expressed as mean \u00b1 SD (standard deviation) and were processed using Statistical Product and Service Solutions (SPSS) 20.0. The diagnostic and prognostic values of hsa_circ_0062270 in melanoma were evaluated by the receiver operating characteristic (ROC) and Kaplan-Meier methods, respectively. The correlation between hsa_circ_0062270 and CDC45 levels was assessed through the Pearson correlation test. A significant difference was set at p < 0.05. Hsa_circ_0062270 was up-regulated in melanoma tissues compared with normal skin tissues (p = 0.0234). We further found that hsa_circ_0062270 was also up-regulated in melanoma cell lines. RNase R treatment verified the circular structure of hsa_circ_0062270. Therefore, hsa_circ_0062270 may serve as a diagnostic and prognostic indicator in melanoma. To investigate the effects of hsa_circ_0062270 on melanoma cell proliferation, migration and invasion, cells were treated with hsa_circ_0062270 siRNA. Results indicated that transfection of hsa_circ_0062270 siRNA markedly down-regulated the hsa_circ_0062270 level in A375 and A2058 cells. Knockdown of hsa_circ_0062270 markedly suppressed the proliferative, migratory and invasive abilities of these cells. Then we focused on the potential target of hsa_circ_0062270 in the regulation of phenotypes of melanoma. CircRNAs are involved in pathological processes via mediating expression levels of their linear transcripts. CDC45 was the linear transcript of hsa_circ_0062270, and its expression was positively correlated with that of hsa_circ_0062270. Knockdown of hsa_circ_0062270 down-regulated CDC45 in melanoma cells. To further elucidate the effects of CDC45 on melanoma phenotypes, we established the overexpression models of CDC45. Transfection of the overexpression plasmid of CDC45 effectively up-regulated CDC45 in A375 and A2058 cells. To uncover the co-regulation of hsa_circ_0062270 and CDC45, cells were co-transfected using si-CDC45 and the overexpression plasmid of hsa_circ_0062270; overexpression of hsa_circ_0062270 counteracted the effects of CDC45 knockdown on these phenotypes. The incidence of melanoma is not high, covering 4\u20135% of malignant tumors. Family history, multiple atypical moles and dysplastic moles are risk factors that trigger the carcinogenesis of melanoma. CircRNAs, as a type of emerging noncoding RNAs, have attracted much attention because of their unique structure and vital functions. In vitro experiment results illustrated that knockdown of hsa_circ_0062270 remarkably suppressed proliferative, migratory and invasive functions of melanoma cells. We previously found that the expression of hsa_circ_0062270 in melanoma was up-regulated. The diagnostic and prognostic potentials of hsa_circ_0062270 in melanoma were verified through depicting ROC and Kaplan-Meier curves, respectively. Recent studies have demonstrated that circRNAs have an important role in disease progression by mediating expressions of their linear transcripts. Through the Pearson correlation test, we confirmed the positive correlation between hsa_circ_0062270 and CDC45 in melanoma. Several limitations of our study should be pointed out. First of all, the in vivo role of hsa_circ_0062270 in melanoma is not explored. Secondly, how hsa_circ_0062270 regulates CDC45 remains unclear. Thirdly, other cell phenotypes of melanoma, including apoptosis, epithelial-mesenchymal transition and cell cycle progression affected by hsa_circ_0062270, are not clear. Collectively, our present study was the first attempt to reveal that hsa_circ_0062270 was up-regulated in melanoma specimens and correlated to its prognosis.
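As a methodological aside, the ROC evaluation of diagnostic potential described above can be sketched with scikit-learn as follows. The expression values and labels are simulated stand-ins, not the study data, and the cutoff rule (Youden's J) is one common, but not the only, choice.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Simulated relative expression: melanoma samples higher on average
expr_normal = rng.normal(1.0, 0.4, 50)    # 50 normal skin tissues, label 0
expr_tumor = rng.normal(2.2, 0.8, 50)     # 50 melanoma tissues, label 1
y = np.concatenate([np.zeros(50), np.ones(50)])
x = np.concatenate([expr_normal, expr_tumor])

fpr, tpr, thresholds = roc_curve(y, x)
print(f"AUC = {roc_auc_score(y, x):.3f}")
j = tpr - fpr                              # Youden's J statistic
print(f"best expression cutoff = {thresholds[j.argmax()]:.2f}")
```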
Hsa_circ_0062270 stimulated the malignant process of melanoma by stabilizing its linear transcript CDC45. Our findings provide a new aspect for developing diagnostic and therapeutic strategies for melanoma. Hsa_circ_0062270 promotes proliferative, migratory and invasive functions in melanoma cells via stabilizing the linear transcript CDC45. These findings provided strong evidence that hsa_circ_0062270 could be a novel promising therapeutic target for the diagnosis and treatment of melanoma."}
+{"text": "Structural annotation of genetic variants in the context of intermolecular interactions and protein stability can shed light onto mechanisms of disease-related phenotypes. Three-dimensional structures of related proteins in complexes with other proteins, nucleic acids, or ligands enrich such functional interpretation, since intermolecular interactions are well conserved in evolution. We present d-StructMAn, a novel computational method that enables structural annotation of local genetic variants, such as single-nucleotide variants and in-frame indels, and implements it in a highly efficient and user-friendly tool provided as a Docker container. Using d-StructMAn, we annotated several very large sets of human genetic variants, including all variants from ClinVar and all amino acid positions in the human proteome. We were able to provide annotation for more than 46% of positions in the human proteome, representing over 60% of proteins. d-StructMAn is the first of its kind and a highly efficient tool for structural annotation of protein-coding genetic variation in the context of observed and potential intermolecular interactions. d-StructMAn is readily applicable to proteome-scale datasets and can be instrumental in building machine-learning tools for predicting genotype-to-phenotype relationships. Key Points: A novel bioinformatics tool for structural characterization of genetic variants is presented. Single-nucleotide variants and indels are described with respect to intermolecular interactions in homologous protein complexes. An efficient implementation using a Docker container allows for analysis of large whole proteome-scale datasets. In the age of next-generation sequencing, large-scale genetic diversity within populations became apparent. A single human individual of European ancestry carries around 3 million genetic variants. Although most sequence variants have no functional or pathogenic effect, in some cases, even a single mutation can be disease causing. Of those tools that employ protein 3D structure to derive various predictive features, several put variants of interest in the context of protein\u2013protein interactions and interactions with other biomolecules. As far as structural annotation of larger variants is concerned, most methods are developed to annotate single amino acid replacements, and to the best of our knowledge, there is currently no structural annotation pipeline that is able to map indel-type genetic variants to protein 3D structures. In this study, we present d-StructMAn, a new, improved implementation of our earlier tool StructMAn, shipped as a Docker container and applied, among other datasets, to the complete human proteome (https://www.uniprot.org/uniprot/?query=human&fil=proteome%3AUP000005640+AND+organism%3A%22Homo+sapiens+%28Human%29+%5B9606%5D%22&sort=score#). S.K. was supported by the IMPRS-CS graduate student fellowship and DFG project number 430158625. S.K.S. was partially supported by the UdS-HIPS-Tandem Interdisciplinary Graduate School for Drug Research. O.V.K.
was supported by the Klaus Faber Foundation. A.G. devised the method and implemented the core functionality. S.K.S. and S.K. assisted with the method development and implemented the container. V.R. and O.V.K. conceived the project. All authors wrote the manuscript. We would like to thank Anne Tolkmitt, Nadezhda Azbukina, Amit Fenn and Olga Tsoy for testing d-StructMAn. Supplementary and review-history files: giac086_GIGA-D-22-00032_Original_Submission; giac086_GIGA-D-22-00032_Revision_1; giac086_GIGA-D-22-00032_Revision_2; giac086_Response_to_Reviewer_Comments_Original_Submission; giac086_Response_to_Reviewer_Comments_Revision_1; giac086_Reviewer_1_Report_Original_Submission (Roman Laskowski, PhD -- 4/4/2022, reviewed); giac086_Reviewer_1_Report_Revision_1 (Roman Laskowski, PhD -- 7/14/2022, reviewed); giac086_Reviewer_2_Report_Original_Submission (Eduard Porta Pardo -- 4/5/2022, reviewed); giac086_Reviewer_2_Report_Revision_1 (Eduard Porta Pardo -- 8/1/2022, reviewed); giac086_Supplemental_Files."}
+{"text": "The purpose of this research is to emphasize the importance of mental health and contribute to the overall well-being of humankind by detecting stress. Stress is a state of strain, whether it be mental or physical. It can result from anything that frustrates, incenses, or unnerves you in an event or thinking. Your body\u2019s response to a demand or challenge is stress. Stress affects people on a daily basis. Stress can be regarded as a hidden pandemic. Long-term (chronic) stress results in ongoing activation of the stress response, which wears down the body over time. Symptoms manifest as behavioral, emotional, and physical effects. The most common assessment method involves administering brief self-report questionnaires such as the Perceived Stress Scale. However, self-report questionnaires frequently lack item specificity and validity, and interview-based measures can be time- and money-consuming. In this research, a novel method used to detect human mental stress by processing audio-visual data is proposed. In this paper, the focus is on understanding the use of audio-visual stress identification. Using the cascaded RNN-LSTM strategy, we achieved 91% accuracy on the RAVDESS dataset, classifying eight emotions and eventually stressed and unstressed states. Today, 82 percent of Indians are stressed, as per the Cigna 360 Well-being study. Stress is a condition of mental pressure for individuals facing problems relating to environmental and social well-being which leads to many diseases. It was discovered that academic exams, human relationships, interpersonal difficulties, life transitions, and career choices all contribute to stress. Such stress is commonly associated with psychological, physical, and behavioral issues. According to Lazarus and Folkman (1984), \u201cstress is a mental or physical phenomenon formed through one\u2019s cognitive appraisal of the stimulation and is a result of one\u2019s interaction with the environment\u201d. The existence of stress depends on the existence of the stressor. Feng (1992) and Volpe (2000) defined a stressor as \u201canything that challenges an individual\u2019s adaptability or stimulates an individual\u2019s body or mentality\u201d.
Stress can be caused by environmental factors, psychological factors, biological factors, and social factors, as shown in the corresponding figure. Human stress represents an imbalanced state of an individual. Emotions are present in almost every decision and moment of our lives. Thus, recognizing emotions awakens interest, since knowing what others feel helps us to interact with them more effectively. Emotions are considered a psychological state. It must be considered that emotions are subjective to an individual, i.e., each subject may experience a different emotion in response to the same stimuli. Thus, emotions can be classified into two different models\u2014the discrete model and the dimensional model. The discrete model includes basic emotions such as happiness, sadness, fear, disgust, anger, surprise, and mixed emotions such as motivation, self-awareness, etc. The dimensional model is expressed in terms of two dimensions, valence and arousal. The various emotions experienced by a human can be represented through the Plutchik wheel of emotion, as shown in the corresponding figure. Several researchers have analyzed human stress using basic emotions. It is possible to map emotions with the stress level. Stress can be detected based on emotions obtained from the audio-visual data. Human emotions are expressed in the voice as well as on the face. The emotional state is extracted from the audio-visual data first. Positive emotions such as happiness, joy, love, pride, and pleasure can have a positive effect, such as improving daily work performance, and negative emotions such as anger, terror, sadness, and disgust can have a negative impact on the health of a person. Positive and negative emotions, and the valence\u2013arousal space in which they can be placed, are illustrated in the corresponding figures. Our model reached an accuracy of 91% on The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), outperforming some of the previous solutions evaluated in similar conditions. As far as we know, our study also represents the first attempt to combine speech and facial expressions to recognize the eight emotions in RAVDESS and finally conclude on a stressed or relaxed state. The rest of the paper is organized as follows: By outlining some of the difficulties that these systems encountered, we present earlier automatic stress detection techniques here. We describe the stress-inducing stimuli that were employed, how stress was measured, the signals that were gathered, and the machine learning techniques that were applied in these studies. Stress detection from speech signals has many applications. It is used in psychology to monitor the different stress levels of patients with different stress conditions and provide necessary treatments. The safety and security of a system can be established by monitoring the different stress levels of pilots, deep sea divers and military officials undertaking law enforcement. Stress detection is also useful in speaker identification, deception detection and identification of threatening calls in a few cases of crimes. Kevin Tomba et al. worked on stress detection through speech analysis. Speech and facial expression are two natural and effective ways of expressing emotions when human beings communicate with each other. During the last two decades, audio-visual emotion recognition integrating speech and facial expression has attracted extensive attention owing to its promising potential applications in human\u2013computer interaction. However, stress detection from combined audio-visual cues has received far less attention. G. Giannakakis et al.
recorded facial cues from videos of subjects performing stress-inducing tasks. In much of the earlier work, audio-visual data were not considered for the stress detection, and only audio signals were used. Moreover, the accuracy of the results can be improved. Although there is much research discussing the recognition and analysis of the six basic emotions, i.e., anger, disgust, fear, happiness, sadness, and surprise, considerably less research has focused on stress and anxiety detection from audio visuals, as these states are considered as complex emotions that are linked to basic emotions. The results of emotion state recognition from audio-visual data can be improved using deep learning techniques, which can be further used to detect stress. Overall, this seems to be an interesting area of research, and the analysis of the existing work would help in carrying out future research. To sum up, despite the fact that other works in the literature also performed multimodal emotion recognition on RAVDESS, such as Wang et al., ours is, to our knowledge, the first to map the recognized emotions to a final stressed or relaxed state. In the 1990s and 2000s, the face recognition community was dominated by holistic techniques. Faces are represented using holistic approaches utilising the complete facial region. Many of these approaches function by projecting facial photographs into a low-dimensional space that eliminates unimportant features and variances. PCA is one of the most prominent techniques in this field. Deep neural networks trained with extremely huge datasets have lately supplanted older approaches based on hand-crafted features and typical machine learning techniques. Deep face recognition algorithms, which employ hierarchical design to learn discriminative face representation, have significantly enhanced state-of-the-art performance and spawned a multitude of successful real-world applications. Deep learning employs many processing layers to discover data representations with numerous feature extraction levels. Ekman and Friesen created FACS, which divides the face into 46 action units (AUs), as shown in the corresponding figure. Almost any anatomically conceivable facial expression can be coded using FACS, which breaks it down into the specific action units (AUs) that give rise to the expression. OpenFace is a tool intended for computer vision and machine learning researchers, the affective computing community, and people interested in building interactive applications based on facial behavior analysis. OpenFace is the first toolkit capable of facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation with available source code for both running and training the models. Specifically, OpenFace can identify AUs 1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26, 28 and 45. There are two ways to categorize AUs: intensity and presence. Presence indicates whether an AU is visible on the face. On a scale of 1 to 5, intensity indicates the degree of AU intensity (min to max). Both of these scores are provided by OpenFace. The output file\u2019s column AU01_c encodes 0 as not present and 1 as present for the presence of AU 1. The output file\u2019s column AU01_r has continuous values in the range of 0 (not present), 1 (present at minimum intensity), and 5 (present at maximum intensity) for the intensity of AU 1. Our proposed stress detection framework includes two systems: a speech emotion recognizer and a face emotion recognizer. The outputs of these subsystems were integrated to identify the dominant emotion and eventually result in a stressed or unstressed state.
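A minimal sketch of the decision layer just described: each recognizer emits a probability distribution over the eight RAVDESS emotions, the two distributions are fused, and the dominant emotion is mapped to a stressed or relaxed state. The equal-weight averaging and the grouping of \u201csurprised\u201d with the relaxed (non-negative) class are illustrative assumptions, not necessarily the exact rule used in the study.

```python
import numpy as np

EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]
# Assumed valence grouping: negative emotions map to a stressed state
STRESSED = {"sad", "angry", "fearful", "disgust"}

def fuse_and_decide(p_audio, p_video, w_audio=0.5):
    """Late fusion of two emotion posteriors -> (dominant emotion, state)."""
    p = w_audio * np.asarray(p_audio) + (1 - w_audio) * np.asarray(p_video)
    emotion = EMOTIONS[int(p.argmax())]
    state = "stressed" if emotion in STRESSED else "relaxed"
    return emotion, state

# Hypothetical posteriors from the speech and face recognizers
p_speech = [0.05, 0.05, 0.05, 0.10, 0.45, 0.20, 0.05, 0.05]
p_face = [0.10, 0.05, 0.05, 0.05, 0.50, 0.15, 0.05, 0.05]
print(fuse_and_decide(p_speech, p_face))   # -> ('angry', 'stressed')
```

One benefit of fusing at the score level is that a weak cue from one modality (e.g., a half-occluded face) can be outvoted by a confident cue from the other.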
In the current research, we made a point to highlight a novel method of implementing two different algorithms to function better than any single algorithm working individually. The proposed algorithm not only improves the overall accuracy in determining emotions but also is faster than each individual algorithm, as it uses the advantages of each algorithm and eliminates the disadvantages or time-consuming processes of each of them. Further, the work may seem complicated at the first glance; however, the accuracy improvement in the field of mental stress determination is what we are looking for, and our set objectives for the research work are met through the approach.The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) is licensed under CC BY-NA-SC 4.0. The paper by Livingstone SR and Russo FA (2018) described the construction and validation of the dataset.There are 7356 files in the RAVDESS. Each file was rated ten times for emotional validity, intensity, and authenticity. A group of 247 people who were typical untrained adult research participants from North America provided ratings. The second group of 72 people provided test\u2013retest data. Emotional validity, interrater reliability, and test\u2013retest intra-rater reliability were all reported to be high.The dataset included all 7356 RAVDESS files in their entirety . The three modality formats for each of the 24 actors were audio-only , audio-video , and video-only (no sound). Please take note that Actor 18 did not have any song files.A total of 4948 samples were used for this task. Audio files were extracted from video-audio files using the \u201cmp4 to wav\u201d algorithm. The filenames for each of the 7356 RAVDESS files were distinctive. A seven-part numerical identifier comprised the filename . These codes specified the properties of the stimulus:The filename identifiers used are illustrated in Taking the example of the RAVDESS filename 02-01-06-01-02-01-12.mp4:Video-only (02)Speech (01)Fearful (06)Normal intensity (01)Statement \u201cdogs\u201d (02)1st Repetition (01)12th Actor (12)Female, as the actor ID number is even.ANN and/or CNN have been presented before in the literature, and an accuracy of around 80% has been reported for them. In our literature review, we did not find any individual algorithm which would improve the accuracy of prediction beyond 90%. So, we needed a different approach wherein we combined two relatively less processor-heavy algorithms to work on and improve the accuracy and simultaneously work at a faster rate. However, as rightly pointed out by the reviewer, in our continued plan for our research work, we will make a point to work on ANN- and CNN-based algorithms to either present a comparative analysis or to cascade them as per our intended method to verify their performance for the said cause. Recurrent neural networks (RNNs) have been successfully applied to sequence learning issues such as action identification, scene labeling, and language processing. An RNN has a recurrent connection, unlike feed-forward networks such as convolutional neural networks (CNNs), where the previous hidden state is an input to the subsequent state. An enhanced RNN, or sequential network, called a long short-term memory network, allows information to endure. It is capable of resolving the RNN\u2019s vanishing gradient issue. Persistent memory is achieved via a recurrent neural network or RNN. 
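Returning briefly to the dataset side, the seven-part filename convention described above can be parsed in a few lines. The sketch below follows the RAVDESS naming documentation; treat the exact label strings and table entries as assumptions rather than project code.

```python
EMOTION = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
           "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised"}
MODALITY = {"01": "full-AV", "02": "video-only", "03": "audio-only"}

def parse_ravdess(filename):
    """Split e.g. '02-01-06-01-02-01-12.mp4' into labeled fields."""
    parts = filename.split(".")[0].split("-")
    modality, vocal, emotion, intensity, statement, repetition, actor = parts
    return {
        "modality": MODALITY[modality],
        "vocal_channel": "speech" if vocal == "01" else "song",
        "emotion": EMOTION[emotion],
        "intensity": "normal" if intensity == "01" else "strong",
        "statement": statement,
        "repetition": int(repetition),
        "actor": int(actor),
        "sex": "female" if int(actor) % 2 == 0 else "male",
    }

print(parse_ravdess("02-01-06-01-02-01-12.mp4"))
# -> video-only, speech, fearful, normal intensity, statement 02,
#    1st repetition, actor 12, female
```

The emotion field extracted here is what serves as the class label when training the sequence models discussed next.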
Let us imagine that when reading a book or viewing a movie, you are aware of what happened in the preceding scene or chapter. RNNs function similarly; they retain the knowledge from the past and apply it to process the data at hand. Due to their inability to remember long-term dependencies, RNNs have this drawback. Long-term dependency issues are specifically avoided when designing LSTMs.In our case, RNN is used to classify data of facial landmark position with respect to time for visual data analysis and to classify the pitch of different frequencies of the audio signal with respect to time to determine the emotions.Speech and facial expressions are used to detect users\u2019 emotional states. These modalities are combined by employing two independent models connected by a novel approach. By merging the information from aural and visual modalities, audio-visual emotion identification is vital for the human\u2013machine interaction system. We propose a cascaded RNN-LSTM approach for audio-visual emotion recognition through correlation analysis. The emotions will finally be categorized as a stressed mental state or a relaxed mental state. We use the RAVDESS dataset for the verification of the proposed algorithm.The flowgraph for stress recognition using speech signals is shown in Our deep learning model contains two individual input streams, i.e., the audio network processing audio signals with the cascaded RNN-LSTM model, and the visual network processing visual data with the hybrid RNN-LSTM model. The flowchart for the algorithm is in In the proposed algorithm, audio files are extracted from the video files and processed separately. Librosa is used to process audio files while OpenFace is used to process video files. Overall, 66% of samples are used for training purposes, while the rest are used for testing the algorithm. In the algorithm, RNN and LSTM work parallelly to improve the speed of the feature extraction process. Audio signals need 20 neurons in the LSTM network while video signals need 40 neurons due to their signal processing requirements. MFCC is used as a filter for feature extraction. Dropout layers are used to prevent data from overfitting. Max pooling with convolution creates the final 8 required labels from the features. A dense sigmoid function is used for the final classification of the output with 10 neurons each. The separate outputs of both audio and video files are compared on a common platform to improve the accuracy by matching the missing labels. The following emotions are predicted in this model: \u201cneutral\u201d: \u201c01\u201d, \u201ccalm\u201d: \u201c02\u201d, \u201chappy\u201d: \u201c03\u201d, \u201csad\u201d: \u201c04\u201d, \u201cangry\u201d: \u201c05\u201d, \u201cfearful\u201d: \u201c06\u201d, \u201cdisgust\u201d: \u201c07\u201d, \u201csurprised\u201d: \u201c08\u201d. Finally, 8 emotions are classified into 2 mental states\u2014stressed and relaxed. First of all, we chose the method of comparing both audio and video files to avoid any misrepresentation of emotions due to the use of only one kind of file. In a scenario where the classification of both files is different, the average sum of scores of each signal will determine the probability of the inclination of the signals to a particular emotion. However, such a scenario has not yet occurred in our work, and hence the algorithm has not yet been validated.We used the Jupyter interface to run the program. 
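To make the audio branch tangible, the sketch below extracts MFCC features with Librosa, adds the extra array dimension the sequence layers expect, and stacks a small Conv1D front end with an LSTM and a sigmoid output over the eight emotion labels. The layer sizes and exact topology are illustrative assumptions that only loosely mirror the layer summary printed further below; this is not the trained production model.

```python
import numpy as np
import librosa
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Conv1D, Activation, Dropout,
                                     MaxPooling1D, LSTM, Dense)

def mfcc_features(wav_path, n_mfcc=40):
    """Load one clip and average its MFCCs over time -> (n_mfcc,) vector."""
    y, sr = librosa.load(wav_path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# X has shape (n_samples, 40); Conv1D/LSTM need (n_samples, 40, 1):
# X = np.expand_dims(X, axis=2)

model = Sequential([
    Conv1D(64, 5, padding="same", input_shape=(40, 1)),
    Activation("relu"),
    Dropout(0.1),                    # guard against overfitting
    MaxPooling1D(pool_size=4),
    Conv1D(128, 5, padding="same"),
    Activation("relu"),
    Dropout(0.5),
    LSTM(20),                        # 20 units for the audio branch
    Dense(8, activation="sigmoid"),  # one output per RAVDESS emotion
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The video branch follows the same pattern but, as noted above, uses 40 LSTM units over the OpenFace landmark and action-unit time series.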
LibROSA, a python package, was used for music and audio analysis, while the OpenFace package was used for facial motion tracking.We plotted the signal from a random file with audio and facial recognition separated as shown in Two facial recognition examples are illustrated in NumPy array was created for extracting Mel-frequency cepstral coefficients (MFCCs), while the classes for prediction were extracted from the name of the file.To apply the cascaded RNN-LSTM method effectively, we need to expand the dimensions of our array, adding a third one using the NumPy \u201cexpand_dims\u201d feature.Layer (type) Output Shape Param #=================================================================conv1d_1 (Conv1D) 768_________________________________________________________________activation_1 (Activation) 0_________________________________________________________________dropout_1 (Dropout) 0.1_________________________________________________________________max_pooling1d_1 0_________________________________________________________________conv1d_2 (Conv1D) 82,048_________________________________________________________________activation_2 (Activation) 0_______________________________________________________________dropout_2 (Dropout) 0.5_________________________________________________________________flatten_1 (Flatten) 0_________________________________________________________________dense_1 (Dense) 6410_________________________________________________________________activation_3 (Activation) 0=================================================================Total params: 89,226Trainable params: 89,226Non-trainable params: 0The model loss of epochs based on training and test data is shown in the To understand the errors of the top solution, we extracted the confusion matrix of the SVM, LSTM, and RNN-LSTM approaches with an accuracy of 76%, 82%, and 91%, respectively. The confusion matrix displayed in the The proposed algorithm is compared with the conventional ones and the performance analysis is presented the Final Output:1633/1633 [==============================]\u20140s 125s/stepAccuracy: 91.00%The existing work was focused on either audio or facial images. In audio-visual data, the separate output of audio and video files was compared on a common platform to improve accuracy by matching the missing labels. In order to enhance the accuracy further, we increased the dimensions of the dataset, as LSTM works better with more data. The accuracy for prediction for the proposed algorithm for the RAVDESS dataset is 91%.Only image-based classification may give polarized results in cases where the image under processing lacks the overall gesture being conveyed. Moreover, using audio and visual signals will help to improve the emotion classification accuracy, which is needed to determine whether the algorithm further needs to be fully developed for the medical determination of mental stress. Although we used well-established packages for our work, we made several changes to the algorithm to make it work and provide novelty. The changes in the algorithm include cascading or the parallel operation of algorithms , the addition of dropout layers to adjust the blank values and to avoid overfitting of the data, and processing of both audio and video files to compare and improve classification accuracy. We would like to state that this method of implementing the algorithm has never been reported in the literature before.Detecting stress is essential before it turns chronic and leads to health issues. 
The current paper suggests that audio-visual data have the potential to detect stress. In our society, stress is becoming a major concern, and modern employment challenges such as heavy workloads and the need to adjust to ongoing change only make the situation worse. In addition to severe financial losses in businesses, people are experiencing health issues related to excessive amounts of stress. Therefore, it is crucial to regularly check your stress levels to detect stress in its preliminary stages and prevent harmful long-term consequences. The necessity for individuals to handle chronic stress gave rise to the concept of stress detection. The accuracy of the cascaded RNN-LSTM approach for the RAVDESS dataset is 91%. The obtained results are 15\u201320% better than those of other conventional algorithms. The proposed method is an excellent starting point to work towards mental health by detecting stress and improving one\u2019s quality of life.The evaluation of the test results showed that the successful detection of stress is achieved, although further improvements and extensions can be made. The implementation of this system can be improved by using more efficient data structures and software to reduce delays and achieve real-time requirements."} +{"text": "Although wildfires are an important ecological process in forested regions worldwide, they can cause significant economic damage and frequently create widespread health impacts. We propose a network optimization approach to plan wildfire fuel treatments that minimize the risk of fire spread in forested landscapes under an upper bound for total treated area. We used simulation modeling to estimate the probability of fire spread between pairs of forest sites and formulated a modified Critical Node Detection (CND) model that uses these estimated probabilities to find a pattern of fuel reduction treatments that minimizes the likely spread of fires across a landscape. We also present a problem formulation that includes control of the size and spatial contiguity of fuel treatments. We demonstrate the approach with a case study in Kootenay National Park, British Columbia, Canada, where we investigated prescribed burn options for reducing the risk of wildfire spread in the park area. Our results provide new insights into cost-effective planning to mitigate wildfire risk in forest landscapes. The approach should be applicable to other ecosystems with frequent wildfires. Wildfires, while being a natural ecosystem process in many biomes, can pose significant economic and social threat to human communities in forested regions \u20133. Land Preventive fuel treatments, such as prescribed burns or strategic thinning of forest stands, are intended to decrease the probability of fire spread and reduce fire severity and, consequently, the damage to human infrastructure. While there is a consensus that a substantial reduction of flammable biomass will reduce fire spread and severity, factors such as the location, size, maintenance, and use in fire operations may undermine fuel-treatment effectiveness ,6. If efOptimization has been widely used to support decisions about fire prevention and suppression \u201320. SeveSeveral of the proposed models minimized connectivity between forest patches with high wildfire risk as a way to reduce fire spread potential in a landscape . 
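Before introducing the model, note that the pairwise-connectivity idea this strategy rests on is easy to state in code. Below is a toy sketch using NetworkX; the six-patch landscape and the treated node set are invented for illustration and are not part of the study.

```python
import networkx as nx

def pairwise_connectivity(G):
    """Number of node pairs joined by at least one path."""
    return sum(len(c) * (len(c) - 1) // 2
               for c in nx.connected_components(G))

# Toy landscape: patches 0..5, edges = possible fire-spread adjacencies
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)])
print("before treatment:", pairwise_connectivity(G))   # 15 connected pairs

treated = {1, 4}                  # hypothetical prescribed-burn locations
H = G.copy()
H.remove_nodes_from(treated)
print("after treatment:", pairwise_connectivity(H))    # only 1 pair remains
```

Removing two well-placed nodes collapses the network into small fragments, which is exactly the effect critical node detection seeks, except that the models below also weight each pair by how likely fire is to spread between them.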
Minas and colleagues, for example, linked site treatment and suppression resource deployment decisions within a single optimization model. In this paper, we employ a directional metric that quantifies the probability of fires spreading between a pair of locations in a landscape to solve a fuel treatment problem: allocating a set of prescribed burns to minimize the chance that wildfires will spread through the landscape, subject to an upper bound on the total treated area of prescribed burning or similar fuel reduction treatments (henceforth referred to as \u201cprescribed burning\u201d). For each pair of forest patches, we estimate the probability that a fire ignited in one patch will spread to another. We incorporate this metric into a modified Critical Node Detection (CND) problem. A forest landscape can be thought of as a connected network of flammable patches (nodes), where the connecting arcs (edges) depict possible vectors of fire spread between adjacent patches. To minimize the possibility of fires spreading widely across the area, the manager allocates a set of treatments (prescribed burns) among the nodes. Treating a node helps reduce fire intensity to the point where the treated node no longer sustains fire spread. A popular strategy for solving this problem is to reduce the connectivity between nodes with flammable fuels in the landscape network. This strategy can be implemented by solving a CND problem, which finds the key nodes in a network whose removal maximally degrades the connectivity of the network according to a chosen metric. Let G = (N, E) be a graph with a set of N nodes (vertices) and a set of edges E, E \u2282 N\u00d7N; two nodes i and j are considered connected when there is a path connecting i and j, i.e., when nodes i and j are in the same connected component. The graph obtained after a removal of R critical nodes is a subgraph of G composed of the set of remaining nodes, N \\ R. Our formulation requires O(|N|2) constraints and is more efficient than the original CND problem formulation with triangular inequalities. Set \u0398i\u2212 denotes adjacent nodes j (including node 0) that can pass flow to node i, while set \u0398i+ denotes adjacent nodes k that can receive flow from i. One group of constraints ensures that no flow passes through node i when node i is not selected, and a companion group sets the flow variables to zero if no flow occurs between nodes i and j in step t. A further constraint ensures that the flow entering node i from other nodes (including node 0) over T steps comes through no more than one arc; this guarantees no overlap between the node selections in different steps t. The remaining constraints define the indicator variables, which equal one when a node is treated and zero otherwise, and specify that the plan contains exactly T planned burns. The time to find a feasible solution can be reduced by replacing the objective so that each step t\u2019 includes the preceding steps, thereby accounting for a cumulative reduction of fire spread potential after each period. This makes problem 3 combinatorically harder than problem 2. To reduce the solving time, we used the problem 2 solutions to initialize problem 3: we first solved problem 3 with an added constraint fixing xi = \u03c7i, where \u03c7i denotes the values of decision variable xi in the problem 2 solution. Note that the model does not need to trace the actual path of a fire from i to j, because the spread probability value pij only defines the likelihood that a fire which is ignited in location i will spread to location j and does not require specification of how the fire might spread from i to j. Evaluating the presence of a path connecting nodes i and j is handled by the CND model constraints. The spread probabilities pij between node pairs ij in landscape N were estimated with a fire simulation model, as described next.
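To make both ingredients concrete, the sketch below first estimates pij by Monte Carlo counting over simulated fires (the fraction of simulated fires ignited in i whose perimeter reached j) and then feeds the estimates into a pij-weighted critical node selection. For brevity, the optimization shown is the classic CND formulation with triangular inequalities, i.e., the very formulation the flow-based model above improves upon, solved here with PuLP and its bundled CBC solver. The graph, the simulated burn perimeters, and the treatment budget are all invented toy inputs, not the Kootenay case study data.

```python
import itertools
import pulp

# --- 1. Monte Carlo estimate of spread probabilities p_ij ----------------
# Each simulated fire: (ignition node, set of nodes its perimeter covered).
burns = [(0, {0, 1, 2}), (0, {0, 1}), (3, {3, 4, 5}), (3, {3, 4}), (1, {1, 2, 4})]
nodes = list(range(6))
counts = {i: 0 for i in nodes}
hits = {}
for ign, burned in burns:
    counts[ign] += 1
    for j in burned - {ign}:
        hits[(ign, j)] = hits.get((ign, j), 0) + 1
p = {ij: n / counts[ij[0]] for ij, n in hits.items()}   # e.g. p[(0, 2)] == 0.5

# --- 2. p_ij-weighted CND with triangle inequalities ---------------------
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
budget = 2                                   # max number of treated nodes
pairs = [(i, j) for i in nodes for j in nodes if i < j]

m = pulp.LpProblem("weighted_CND", pulp.LpMinimize)
x = pulp.LpVariable.dicts("treat", nodes, cat="Binary")  # 1 = treat node
u = pulp.LpVariable.dicts("conn", pairs, cat="Binary")   # 1 = pair stays connected

def U(a, b):                                  # unordered-pair lookup
    return u[(a, b)] if a < b else u[(b, a)]

def w(i, j):                                  # symmetrized spread risk
    return p.get((i, j), 0.0) + p.get((j, i), 0.0)

m += pulp.lpSum(w(i, j) * U(i, j) for i, j in pairs)     # residual risk
for i, j in edges:                 # untreated neighbours stay connected
    m += U(i, j) + x[i] + x[j] >= 1
for i, j, k in itertools.combinations(nodes, 3):         # transitivity
    m += U(i, j) >= U(i, k) + U(j, k) - 1
    m += U(i, k) >= U(i, j) + U(j, k) - 1
    m += U(j, k) >= U(i, j) + U(i, k) - 1
m += pulp.lpSum(x[i] for i in nodes) <= budget

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("treat nodes:", [i for i in nodes if x[i].value() == 1])
print("residual weighted connectivity:", pulp.value(m.objective))
```

The triangle-inequality block grows as O(|N|3), which is exactly why a flow-based reformulation with O(|N|2) constraints, as used in this paper, matters at landscape scale; the multi-step warm-start ladder over T described above is likewise omitted from this sketch.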
Fire simulation models generate stochastic ignition events and plausible perimeters of fires spreading from the ignition locations; these simulated fires provided the basis for estimating pij. To shorten solution times, we solved the model for a small number of steps first and then initialized each successively larger instance from the preceding solution\u2014e.g., from the solution with T = 3 as a warm start\u2014and so on until we solved the model for a desired number of steps. We then used the set of decision variables from the last solution to warm start the full problem. Problem 3 was initialized from problem 2 solutions in similar fashion. The model was run for 72 hours or until reaching an optimality gap of 0.5%, whichever came first. The model was composed in the General Algebraic Modeling System (GAMS). Reviewers' comments: Reviewer's Responses to Questions. Comments to the Author. 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1:\u00a0Yes. Reviewer #2:\u00a0Yes. ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1:\u00a0Yes. Reviewer #2:\u00a0Yes. ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. Reviewer #1:\u00a0Yes. Reviewer #2:\u00a0Yes. ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1:\u00a0Yes. Reviewer #2:\u00a0Yes. ********** 5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1:\u00a0General comments: Your work is interesting and, for the most part, well-presented. I find no major flaws in your analysis that warrant revision to the methods, but there are a few sections of the paper that could be improved to clarify your assumptions and methods. I\u2019ll highlight my main critiques here, followed by specific recommendations. First, I suggest you revisit the abstract to make sure it aligns with what you accomplished. Specific notes follow. Fire behavior is a general term that includes spread but also intensity, duration, and type. In most cases, I think it would be clearer to say something like burn probability or spread likelihood instead in this paper. Does the problem 3 formulation with time add value to your analysis? As you point out, you do not actually remodel the fire spread probabilities.
My personal view is that there is little value planning far in advance in fire-prone landscapes because stochastic wildfire activity will alter the priorities for subsequent periods more than the treatments. If you are interested, other researchers have examined the effects of uncertainty in fire occurrence using two stage models solved with backwards induction:Konoshima M, Montgomery CA, Albers HJ, Arthur JL (2008) Spatial-endogenous fire risk and efficient fuel management and timber harvest. Land Economics 84(3), 449-468.Konoshima M, Albers HJ, Montgomery CA, Arthur JL (2010) Optimal spatial patterns of fuel management and timber harvest with fire risk. Canadian Journal of Forest Research 40, 95-108.These should probably be cited in the discussion when discussing multiple planning periods. I\u2019m surprised you advocate for problem 3 formulation in the discussion given that you say it produced similar results as problem 2 but with added complexity. Why?If you had more space, I would recommend you compare your method to something simple and widely used, such as Finney\u2019s treatment optimization method or specific patterns of fuel treatments. When you start to describe the contexts that your problem 3 can be applied in , I start to wonder if you really diverge from a plan that interrupts the major spread paths of an anticipated problem fire scenario.Finney MA (2004) Chapter 9, Landscape fi re simulation and fuel treatment optimization. In: J.L. Hayes, A.A. Ager, J.R. Barbour, (tech. eds). Methods for integrated modeling of landscape change: Interior Northwest Landscape Analysis System. PNW-GTR-610. p 117-131.Finney MA (2001) Design of regular landscape fuel treatment patterns for modifying fire growth and behavior. Forest Science 47(2), 219-228.Specific comments:Lines 40-42: It is fair to point out that resources to mitigate wildfire risk are limited, and it is therefore important to prioritize, but there are many tools to assist forest managers in planning fuel treatments. I suggest dropping the focus on limited tools here. Whether the tools are used by managers is another issue.Line 42: I suggest replacing \u201cinterdiction\u201d with common language since you felt the need to define it in the paper.Line 43: Did you really cover consequences in this work? I didn\u2019t see an effects analysis here.Lines 44-45: How about \u201cWe used simulation modeling to estimate the likelihood of fire spread between forest network nodes and we\u2026\u201dLine 46: fuel treatment ALLOCATION problemLines 64-68: Yes. Costs tend to rise with fire sizes, but rising suppression costs are often attributed to the expansion of human values into wildlands. It would be less controversial if you focused this statement only on the challenge of suppressing fires in rugged landscapes.Lines 69-70: I think it is appropriate to at least mention that not all forestry and fire scientists agree that fuel treatments will reduce fire spread. Your Agee reference on shaded fuelbreaks includes discussion of this. Agee and Skinner (2005) and Reinhardt et al. (2008) argue strongly that most fuel treatments are, or should be, aimed at reducing fire intensity and severity.Lines 70-72: Is the saving costs statement supported by these citations? It was a modeling study, but the Thompson et al. 
(2013) \u2018Quantifying the Potential Impacts of Fuel Treatments on Wildfire Suppression Costs\u2019 article provides the clearest estimates of how fuel treatments could reduce costs via their effects on fire sizes.Lines 76-77: Again, I think it is appropriate to acknowledge that some of these models were aimed at reducing the severity of effects instead of large fire spread.Lines 115-116: I would drop the subscripts here and save them for the methods.Lines 127-130: Again, I think you should temper this statement to make it clear that it is more of an assumption supported by rules of thumb than a clear conclusion of the research. It is also prudent to acknowledge that fuel treatments do not generally achieve 100% protection, especially in the case of extreme weather. The Kalies and Kent (2016) review on fuel treatment effectiveness may be worth mentioning here.Line 134: You already introduced the CND abbreviation.Line 156-157: Would it not be simpler to introduce the model as area limited since you don\u2019t account for variable costs? I see the future value of accounting for this, but it adds slight confusion to the paper. For example, you describe the model as having an upper bound for Rx fire area in the abstract and introduction.Lines 183-185: And fire suppression?Line 197: \u201cdepict well\u201d to \u201crepresent\u201d?Line 234: \u201cConsecutively\u201d or \u201cconsequently\u201d?Lines 313-314: I do not think it is a good idea to use T and t for both steps in problem 2 and time periods in problem 3. I suggest changing a different letter for the steps in problem 2 to avoid confusion.Lines 405-407: Rephrase for clarity.Lines 437-442: You should clarify exactly how the information you mention was used. Were treatments limited to a particular vegetation type? Did you use the ignition probability, but not spread probability components of Burn P3? Is this later what you refer to as prioritizing on ignitions?Lines 443-452: This is where it would help to know the difference between T steps and T time periods.Lines 464-474: As noted in my general comments, this is an important enough change in methods that you should clarify which results it applies to and describe it fully in the methods section instead of supplementary material referenced from the results.Lines 488-489: \u201cignoring spatial contiguity rules\u201d? or \u201cignoring the simulated connectivity measures\u201d?Line 494: Suggest changing fire behavior to fire spread.Lines 520-525: Why is this scenario suddenly popping up in the results? This should be introduced earlier with justification for what it tells you. Prescribed fires likely reduce ignition risk for a short period after treatment, but this will not last long as fuels reaccumulate. Reducing ignitions with rules and enforcement may require different methods in some landscapes.Lines 579-582: I\u2019m confused about what scenarios are being compared here.Lines 598-611: I\u2019m wondering how much the small/large fire size tradeoff that is important at this site pertains to the use of probability vs. binary fireshed weighting versus the specific pattern of fire sizes and occurrence on your landscape? What do you think you would find on a landscape with high probability of fire spread from less frequent but large fires?Lines 615-619: Did the approximation you made to get at spread paths within fires really \u201caddress\u201d the problem? I don\u2019t have a brilliant solution to do better without complicating the simulation. 
I would be tempering my language here to reflect that some approximations were made to prototype a model framework.Line 625: With a shortest path approximation\u2026Figure 3: You should probably include a scale bar and north arrow in the study site panel.Reviewer #2:\u00a0GENERAL COMMENTThe manuscript entitled \u201cDetecting critical nodes in forest landscape networks to reduce wildfire spread\u201d aims to propose a modeling approach to optimize preventive fuel treatments for minimizing the wildfire spread likelihood and consequences. The methodological approach presented in this manuscript was tested in a study area located in Kootenay National Park, British Columbia (Canada).Overall, I do think the work is interesting and has the potential to provide insights and methods for future studies or analysis that would investigate the potential effects of spatial locations of fuel treatments on wildfire spread, while considering the maximization of the benefit/cost ratio. The present study could also provide relevant information for policy makers and stakeholders to adapt or improve future management plans and strategies in Canada as well as in other areas.https://esajournals.onlinelibrary.wiley.com/doi/abs/10.1890/03-5210; Davies et al. 2015, http://dx.doi.org/10.1071/WF15055; Salis et al. 2018, https://www.sciencedirect.com/science/article/pii/S0301479718301191; Prichard et al. 2020 https://pubmed.ncbi.nlm.nih.gov/32086976/ doi:10.3390/f6062148). This is a shortcoming that can be improved.The Introduction section is well written and provides a generally good overview of the works that investigated this topic. I only have a remark. The authors limit the Introduction section focusing on previous works carried out in forest areas and using prescribed burnings, while the applicability of the approach they propose could be expanded also to semiarid or rural areas, as well as to fuel management strategies different than prescribed fires . Even if manuscripts published in Plos One can be any length, there is need to reduce this part and omit some redundant sentences. Some specific points to improve this section will be provided in later rows.The Results are in my opinion fine. I would suggest making some improvements in Figures 3 and 4.The Discussion section needs to be improved, as the comparison between the results and approach presented in this work are not compared with those obtained in other similar works.In the end, the manuscript is overall of good quality and well-written, but should be improved by reducing the length of the text and improving the quality of the discussion sections.SPECIFIC COMMENTShttps://esajournals.onlinelibrary.wiley.com/doi/abs/10.1890/03-5210; Davies et al. 2015, http://dx.doi.org/10.1071/WF15055; Salis et al. 2018, https://www.sciencedirect.com/science/article/pii/S0301479718301191; Prichard et al. 2020 https://pubmed.ncbi.nlm.nih.gov/32086976/ doi:10.3390/f6062148).Introduction: The Introduction section seem too much focused on works carried out in forest areas and application of prescribed burnings, while the approach proposed in this study could be expanded also to semiarid or rural areas, as well as to fuel management strategies different than prescribed fires L399-402: Considering that no information is provided on crown fire and spot fire settings, I suppose the authors applied Burn-P3 model to simulate surface fire spread. 
In case the authors simulated crown fires and spot fires, I would recommend including more details on this.L408-410: Please include a table, in the Supplementary data, to summarize the main input data used for fire simulations L435-439: Problems 1-4 related to the critical nodes detection (CND) were introduced in the first equations, several pages before this part. I would recommend helping readers and clarifying that these problems refer to the first equations and the detection of critical nodes.L533: Starting a sentence with \u201cRecall that\u201d might be inappropriate, please checkL613-666: The Discussion section summarizes relatively well the principles and generalizations from results as well as the significance of results. On the other hand, it does not discuss the results and methods presented in this work in relation to those of others. This is a limitation, so I recommend improving the Discussion in this sense.Figures 3-4: Please include the scale bar.**********what does this mean?). If published, this will include your full peer review and any attached files.6. PLOS authors have the option to publish the peer review history of their article digital diagnostic tool,\u00a0 5 Aug 2021PONE-D-21-15640 \u201cDETECTING CRITICAL NODES IN FOREST LANDSCAPE NETWORKS TO REDUCE WILDFIRE SPREAD\u201d \u2013 A Reply to Reviewers\u2019 Comments.Academic Editor\u2019s comments:Both reviewers have indicated that the manuscript requires revisions to the Discussion, specifically that you should compare your method against existing methods to show the improvements or disadvantages.We have added text comparing our technique with some previously proposed treatment methods.One common strategy is to treat sites with the highest likelihoods of ignition . However, this strategy does not optimally reduce the risk of spread of escaped fires nor does it address the uncertainty of determining the sites with the highest fire ignition potential. By comparison, our probabilistic fireshed strategy compartmentalizes regions with high ignition potential, thus providing a hedge against the possibility of fires escaping to spread elsewhere.Several other fuel treatment strategies have used site-specific priority weights. Minas et al. (2015) linked site treatment and deployment resources to minimize the number of sites covered by these activities. Each site was assigned a weight by ignition probability and the value under risk if a fire originating in that site is not contained by the initial response. Rachmawati et al. (2015) focused on rapid fuel accumulation after treatment and used site-based combinations of vegetation type and age since fire to find an optimal multi-period sequence of fuel treatments. Wei (2012) applied optimization of fuel treatment at a very small scale (7x7 rows) without embedding a fire simulation model but examined the geometry of the treated areas. Finney et al. proposed the assessment of fuel treatments by dividing the landscape into rectangular strips oriented normal to the predominant wind directions. Then, fire growth was simulated, starting with the strip farthest upwind, to identify key fire spread routes and their intersections with the potential treatment areas. The process was repeated after moving each strip in the direction of the wind to impact downwind travel routes and subsequent treatment areas. 
This method finds fuel treatment configurations for a set of likely fire spread routes but overlooks the combinatorial aspects when allocating multiple treatments under a limited budget. Another network-based approach aimed to minimize the connectivity between sites with high fuel loads . Pais et al. (2021) used a network flow model to control the spatial contiguity of the treated area and prioritized treatments using a site-based fire risk metric . The DPV metric assigns treatment priority ranks to sites by modeling fire propagation through a forest landscape as a tree graph and accounts for the potential of a fire ignited at a given site to burn other sites. In contrast, our model makes decisions using the fire spread probabilities between pairs of locations, which enables control based on the presence of possible fire spread paths between these locations. ______________________________________________________Reviewer 2 has indicated that the length of the methods should be reduced, but expects additions around describing the study area and Burn-P3 parameterization.We wanted to note that the Methods section includes the formulation and description of the Critical Node Detection model. The new model formulation is a result on its own but, in keeping with tradition, is presented in the Methods section. This explains the larger-than-normal section size. We believe that the CND model requires a detailed description to understand its principles, and so keeping the larger section size felt justified. However, the fire behaviour simulation model was already published previously in Reimer at al. (2019), so we have only included a brief summary of this model in the main text and moved the description of the its parameters to Supplement S1.______________________________________________________ Reviewer 1 has offered some specific suggestions for improvements in the methods section and the authors should consider moving some text to supplementary materials as suggested by Reviewer 2. Though I found the methods to be a lot to wade through, much of it is necessary in my mind.We agree that much of the descriptive material in the Methods section is necessary to understand how the model works. To reduce the section size, we have moved a portion of text describing the Burn-P3 model and a table with the model parameters to Supplement S1.______________________________________________________ 2. We note that Figure 3 in your submission contain map images which may be copyrighted. <\u2026>We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:Figure 3 is our own design. While it resembles the figure from Reimer et al. (2019), it was created in GIS from scratch using Burn-P3 inputs and data layers in the public domain, and therefore does not require permission to publish.______________________________________________________ Reviewer\u2019s 1 comments:Reviewer #1: General comments:First, I suggest you revisit the abstract to make sure it aligns with what you accomplished. Specific notes follow.We have edited the abstract to make it more succinct.______________________________________________________Fire behavior is a general term that includes spread but also intensity, duration, and type. In most cases, I think it would be clearer to say something like burn probability or spread likelihood instead in this paper.We have used fire spread in the text to avoid confusion. 
______________________________________________________Does the problem 3 formulation with time add value to your analysis? As you point out, you do not actually remodel the fire spread probabilities. My personal view is that there is little value planning far in advance in fire-prone landscapes because stochastic wildfire activity will alter the priorities for subsequent periods more than the treatments.We agree with the reviewer about there being little value to long-term planning in fire-prone landscapes and perceive our approach as a short-term planning tool. Nevertheless, even short-term treatments are typically implemented in a stepwise fashion over a defined time period. The idea behind the problem 3 formulation is to account for the cumulative nature of multi-step planning per se; that is, the actions taken in the first step have the most impact on the system and may thus affect the actions taken in the subsequent steps. The primary difference between the problem 2 and 3 formulations is that the problem 3 formulation allocates the treatments with the greatest impact on fire spread probabilities first, followed by less impactful treatments. We have edited the text to make this point clearer.______________________________________________________If you are interested, other researchers have examined the effects of uncertainty in fire occurrence using two stage models solved with backwards induction:Konoshima M, Montgomery CA, Albers HJ, Arthur JL (2008) Spatial-endogenous fire risk and efficient fuel management and timber harvest. Land Economics 84(3), 449-468.Konoshima M, Albers HJ, Montgomery CA, Arthur JL (2010) Optimal spatial patterns of fuel management and timber harvest with fire risk. Canadian Journal of Forest Research 40, 95-108.These should probably be cited in the discussion when discussing multiple planning periods. The suggested references and a brief section of text referring to two-stage models have been added to the Introduction section:Konoshima et al. integrated a fire simulation model into a two-period stochastic dynamic model to find spatial allocations of timber harvest and fuel management in the face of spatially endogenous fire risk. Their approach used a fire simulation model to enumerate all possible fire occurrence patterns in all plausible treatment decisions and considered the trade-offs between fire risk, timber harvest value and fuel treatment cost.______________________________________________________I\u2019m surprised you advocate for problem 3 formulation in the discussion given that you say it produced similar results as problem 2 but with added complexity. Why?Problems 2 and 3 produced similar spatial results but different allocation of treatments in time. Problem 3 helps find the optimal sequence of treatments , whereas problem 2 does not address the issue of optimal timing because it solves only one CND network for an entire planning horizon. In a sense, problem 3 is more realistic because it provides guidance to forest managers about a stepwise implementation of the fuel reduction strategy.______________________________________________________If you had more space, I would recommend you compare your method to something simple and widely used, such as Finney\u2019s treatment optimization method or specific patterns of fuel treatments. 
When you start to describe the contexts that your problem 3 can be applied in , I start to wonder if you really diverge from a plan that interrupts the major spread paths of an anticipated problem fire scenario.Finney MA (2004) Chapter 9, Landscape fire simulation and fuel treatment optimization. In: J.L. Hayes, A.A. Ager, J.R. Barbour, (tech. eds). Methods for integrated modeling of landscape change: Interior Northwest Landscape Analysis System. PNW-GTR-610. p 117-131.Finney MA (2001) Design of regular landscape fuel treatment patterns for modifying fire growth and behavior. Forest Science 47(2), 219-228.The only limitation for problem 3 is high numerical complexity. The model can be applied to larger landscapes at a coarser spatial resolution. We have added a brief discussion comparing our approach with other methods \u2013 see our reply to the first comment from the Academic Editor.______________________________________________________ Specific comments:Lines 40-42: It is fair to point out that resources to mitigate wildfire risk are limited, and it is therefore important to prioritize, but there are many tools to assist forest managers in planning fuel treatments. I suggest dropping the focus on limited tools here. Whether the tools are used by managers is another issue.The text discussing limited tools has been dropped following the reviewer\u2019s suggestion.______________________________________________________ Line 42: I suggest replacing \u201cinterdiction\u201d with common language since you felt the need to define it in the paper.We have dropped the mention of interdiction \u2013 critical node detection is already a good description of the approach.______________________________________________________ Line 43: Did you really cover consequences in this work? I didn\u2019t see an effects analysis here.Yes \u2013 essentially, this is what the CND formulation in problems 1-3 does.______________________________________________________ Lines 44-45: How about \u201cWe used simulation modeling to estimate the likelihood of fire spread between forest network nodes and we\u2026\u201dThe text has been edited as suggested by the reviewer.______________________________________________________Line 46: fuel treatment ALLOCATION problemThis text fragment was deleted.______________________________________________________ Lines 64-68: Yes. Costs tend to rise with fire sizes, but rising suppression costs are often attributed to the expansion of human values into wildlands. It would be less controversial if you focused this statement only on the challenge of suppressing fires in rugged landscapes.We have edited the text to focus the statement on managing fires in rugged landscapes.______________________________________________________Lines 69-70: I think it is appropriate to at least mention that not all forestry and fire scientists agree that fuel treatments will reduce fire spread. Your Agee reference on shaded fuelbreaks includes discussion of this. Agee and Skinner (2005) and Reinhardt et al. (2008) argue strongly that most fuel treatments are, or should be, aimed at reducing fire intensity and severity.We agree with the reviewer that the focus should not entirely be placed on spread and have added references to fire severity. Although limiting fire spread defines the proposed strategies, we should have better acknowledged the benefits of fuel treatments in reducing fire intensity and severity with respect to the intended purpose of the fuel treatments. 
In addition to the probability of fire spread, our approach could incorporate other fire behaviour parameters (such as fire intensity) in conjunction with spread as long as such data could be generated with the fire simulation models. We added some explanatory text about this to the Discussion. ______________________________________________________Lines 70-72: Is the saving costs statement supported by these citations? It was a modeling study, but the Thompson et al. (2013) \u2018Quantifying the Potential Impacts of Fuel Treatments on Wildfire Suppression Costs\u2019 article provides the clearest estimates of how fuel treatments could reduce costs via their effects on fire sizes.We have added text referring to Thompson et al. (2013) as the reviewer suggested.______________________________________________________ Lines 76-77: Again, I think it is appropriate to acknowledge that some of these models were aimed at reducing the severity of effects instead of large fire spread.We have added text acknowledging that some models were also designed to reduce the severity of future fires in the landscape. ______________________________________________________ Lines 115-116: I would drop the subscripts here and save them for the methods.Done.______________________________________________________ Lines 127-130: Again, I think you should temper this statement to make it clear that it is more of an assumption supported by rules of thumb than a clear conclusion of the research. It is also prudent to acknowledge that fuel treatments do not generally achieve 100% protection, especially in the case of extreme weather. The Kalies and Kent (2016) review on fuel treatment effectiveness may be worth mentioning here.The statement has been edited to make clear that this assumption is a simplification and we acknowledge that fuel treatments are not usually 100% effective. A citation to Kalies and Kent (2016) has been added to the text.______________________________________________________ Line 134: You already introduced the CND abbreviation.Dropped.______________________________________________________ Line 156-157: Would it not be simpler to introduce the model as area limited since you don\u2019t account for variable costs? I see the future value of accounting for this, but it adds slight confusion to the paper. For example, you describe the model as having an upper bound for Rx fire area in the abstract and introduction.We have replaced the budget limit with a treatment area limit as suggested by the reviewer.______________________________________________________ Lines 183-185: And fire suppression?Edited.______________________________________________________ Line 197: \u201cdepict well\u201d to \u201crepresent\u201d?Edited______________________________________________________ Line 234: \u201cConsecutively\u201d or \u201cconsequently\u201d?Consequently.______________________________________________________ Lines 313-314: I do not think it is a good idea to use T and t for both steps in problem 2 and time periods in problem 3. I suggest changing a different letter for the steps in problem 2 to avoid confusion.We have denoted time periods and the full timespan in problem 3 as t\u2019 and T\u2019, respectively, while keeping t (a planning step) and T for problem 2. 
We opted to use the same letter to highlight the analogies between the problem 2 and problem 3 formulations.______________________________________________________ Lines 405-407: Rephrase for clarity.The sentence was deleted.______________________________________________________ Lines 437-442: You should clarify exactly how the information you mention was used. Were treatments limited to a particular vegetation type? Did you use the ignition probability, but not spread probability components of Burn P3? Is this later what you refer to as prioritizing on ignitions?No \u2013 this scenario only used a binary map of flammable / non-flammable land cover types and the flammable land classes were expected to support the spread of fires. Comparatively, the scenarios using fire behaviour information utilized fire spread probabilities pij calculated with the fire simulation model.The only scenario that used ignition probabilities instead of pij values was the solution that allocated treatments to minimize the ignition probability in Fig 8. We have edited the text to make this clearer.______________________________________________________Lines 443-452: This is where it would help to know the difference between T steps and T time periods.We have changed the notation for time periods in the problem 3 formulation to t\u2019 (and the notation for the full timespan to T\u2019) where appropriate. ______________________________________________________ Lines 464-474: As noted in my general comments, this is an important enough change in methods that you should clarify which results it applies to and describe it fully in the methods section instead of supplementary material referenced from the results.The calculation of the wij values is not related to the CND model per se \u2013 these values were used only to map the fire spread hotspots between multiple pairs of locations. Direct mapping of pij arcs makes the map cluttered and difficult to read. The new mapping procedure of fire spread probabilities is a novelty on its own and could help better understand the fire spread patterns in complex landscapes. The full description of the wij mapping procedure is beyond the scope of the current study \u2013 this is the focus of another manuscript. The text provides only basic description germane to understanding the fire spread probability maps. The text has been edited to make this aspect clearer.______________________________________________________ Lines 488-489: \u201cignoring spatial contiguity rules\u201d? or \u201cignoring the simulated connectivity measures\u201d?Ignoring spatial contiguity constraints for prescribed treatments \u2013 the sentence has been edited.______________________________________________________ Line 494: Suggest changing fire behavior to fire spread.Edited.______________________________________________________Lines 520-525: Why is this scenario suddenly popping up in the results? This should be introduced earlier with justification for what it tells you. Prescribed fires likely reduce ignition risk for a short period after treatment, but this will not last long as fuels reaccumulate. Reducing ignitions with rules and enforcement may require different methods in some landscapes.We have introduced a scenario minimizing ignition probabilities in the Methods section, after the description of the scenario based on land cover information only. In our case, we have only done a basic comparison of methods with the same treatment area. 
Implementing a practical enforcement scenario that reduces the probability of ignitions would require adapting both the CND and ignition-minimizing scenarios to the current practical standards and is considered as a theme for another manuscript.______________________________________________________Lines 579-582: I\u2019m confused about what scenarios are being compared here.These sentences were dropped to avoid confusion.______________________________________________________ Lines 598-611: I\u2019m wondering how much the small/large fire size tradeoff that is important at this site pertains to the use of probability vs. binary fireshed weighting versus the specific pattern of fire sizes and occurrence on your landscape? What do you think you would find on a landscape with high probability of fire spread from less frequent but large fires?Given that the model behaviour depends on the spatial configuration of fire spread patterns in the landscape, this question may best be answered by testing the CND model on real landscapes with high probabilities of large fires . This could be the focus of future work. To target a particular range of fire sizes, the pij value for each pair of locations i and j could be adjusted in the objective equation by a user-defined coefficient based on the distance between i and j. This would make the approach adaptable to other fire regime conditions and management objectives. The extent of this adaptability to different fire regimes will be examined in further studies.______________________________________________________Lines 615-619: Did the approximation you made to get at spread paths within fires really \u201caddress\u201d the problem? I don\u2019t have a brilliant solution to do better without complicating the simulation. I would be tempering my language here to reflect that some approximations were made to prototype a model framework.We did not use approximation to calculate the pij probabilities of fire spread between pairs of locations. As shown in Supplement 1, the pij calculations used raw fire simulation model outputs . A pij value only defines the probability that a fire ignited in location i will spread to location j, but does not specify how exactly the fire could spread from i to j; in short, the CND model does not require exact specification of the fire spread paths between i and j. In our study, we have used the ignition points and perimeters of the simulated fires, but did not track specific (and possibly dynamic) fire spread paths within individual fires. Tracking daily or hourly fire spread within individual fire perimeters could potentially refine the pij values, particularly for long spread distances, but would require a more sophisticated fire simulation model that can output the expansion of individual fires on an hourly or daily basis. This would require updating the Burn-P3 simulation model and could be a topic for future research.______________________________________________________Line 625: With a shortest path approximation\u2026No, there is no shortest path approximation in the calculation of the pij values. The pij values were calculated directly from raw fire simulation model outputs . The CND model only needs to know the probability that a fire ignited in location i will spread to location j and does not require specification of a spread path from i to j. Accounting for the presence of a path connecting a pair of locations i and j is handled by the CND model constraints (3) and (4). 
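As a concrete illustration of the pij bookkeeping just described, the sketch below counts, for each ignition node i, the share of simulated fires whose final perimeter covered node j. The data structures are hypothetical stand-ins: actual Burn-P3 outputs are GIS ignition points and fire perimeters, not Python sets.

```python
# Schematic of deriving pij from raw fire-simulation outputs: a fire ignited
# in node i counts as spreading to node j when j lies inside the simulated
# fire's final perimeter. Fire records here are invented placeholders.
from collections import defaultdict

# Each simulated fire: (ignition_node, set_of_nodes_inside_perimeter)
fires = [
    (1, {1, 2, 3}),
    (1, {1, 2}),
    (2, {2, 3, 4}),
    (1, {1, 2, 3, 4}),
]

ignitions = defaultdict(int)   # number of fires ignited in node i
spread = defaultdict(int)      # fires ignited in i whose perimeter covered j

for i, burned in fires:
    ignitions[i] += 1
    for j in burned - {i}:
        spread[(i, j)] += 1

# pij = share of fires ignited in i that reached j; no spread path is needed.
pij = {(i, j): n / ignitions[i] for (i, j), n in spread.items()}
print(pij)
```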
Note that we did use a shortest path approximation to calculate the wij values, but this metric is not used in the CND model and was utilized only for mapping the fire spread patterns.______________________________________________________ Figure 3: You should probably include a scale bar and north arrow in the study site panel.We have added a scale bar and north arrow to Figure 3.______________________________________________________ Reviewer\u2019s 2 comments: https://esajournals.onlinelibrary.wiley.com/doi/abs/10.1890/03-5210; Davies et al. 2015, http://dx.doi.org/10.1071/WF15055; Salis et al. 2018, https://www.sciencedirect.com/science/article/pii/S0301479718301191; Prichard et al. 2020 https://pubmed.ncbi.nlm.nih.gov/32086976/ doi:10.3390/f6062148). This is a shortcoming that can be improved.The Introduction section is well written and provides a generally good overview of the works that investigated this topic. I only have a remark. The authors limit the Introduction section focusing on previous works carried out in forest areas and using prescribed burnings, while the applicability of the approach they propose could be expanded also to semiarid or rural areas, as well as to fuel management strategies different than prescribed fires . Even if manuscripts published in Plos One can be any length, there is need to reduce this part and omit some redundant sentences. Some specific points to improve this section will be provided in later rows.We wanted to note that the methods section includes the formulation and description of the new Critical Node Detection model. The model formulation represents a new result on its own but is traditionally presented in the Methods section. The model formulation also required detailed explanations to understand its principles. Nevertheless, we have edited the Methods section, reducing textual descriptions wherever possible, eliminating redundancies and moving a portion of the Burn-P3 fire model description to Supplement S1.______________________________________________________ The Discussion section needs to be improved, as the comparison between the results and approach presented in this work are not compared with those obtained in other similar works.We have also added a short discussion describing the other treatment strategies \u2013 see our reply to the first comment from the Academic Editor.______________________________________________________Specific comments:https://esajournals.onlinelibrary.wiley.com/doi/abs/10.1890/03-5210; Davies et al. 2015, http://dx.doi.org/10.1071/WF15055; Salis et al. 2018, https://www.sciencedirect.com/science/article/pii/S0301479718301191; Prichard et al. 2020 https://pubmed.ncbi.nlm.nih.gov/32086976/ doi:10.3390/f6062148).Introduction: The Introduction section seem too much focused on works carried out in forest areas and application of prescribed burnings, while the approach proposed in this study could be expanded also to semiarid or rural areas, as well as to fuel management strategies different than prescribed fires .______________________________________________________L386-389: Please include the size of the study area, as well as the total size of the modeling domain .The study area corresponds to the size of the modelling domain (approximately 834 km2). The network included both a core area and a buffer area. 
We have noted the size of the study area in the text.
______________________________________________________
L399-402: Considering that no information is provided on crown fire and spot fire settings, I suppose the authors applied the Burn-P3 model to simulate surface fire spread. In case the authors simulated crown fires and spot fires, I would recommend including more details on this.
We\u2019ve added the following text explaining how Burn-P3 simulates fires: Burn-P3 fully implements the crown fire scheme of the Canadian Fire Behaviour Prediction System (FBP), modelling surface fires as well as the transition to crown fires (and the rate of crown fire spread itself). We have also provided a short Burn-P3 summary in the online Supplement S1 along with the key model parameters: The critical weather conditions under which the transition from surface to crown fire occurs are dependent on the fuel type. While spot fires are not discretely modelled within the FBP System, the empirical rate of spread equations are based on wildfire observation data for high-intensity crown fires, thus effectively incorporating the role of spot fires and ember transport into the rate of spread models.
______________________________________________________
L408-410: Please include a table, in the Supplementary data, to summarize the main input data used for fire simulations.
We have added a short summary and a table summarizing the main inputs for Burn-P3 simulations to online Supplement S1.
______________________________________________________
L435-439: Problems 1-4 related to the critical nodes detection (CND) were introduced in the first equations, several pages before this part. I would recommend helping readers and clarifying that these problems refer to the first equations and the detection of critical nodes.
We have edited the text naming problems 1-3.
______________________________________________________
L533: Starting a sentence with \u201cRecall that\u201d might be inappropriate, please check.
Removed.
______________________________________________________
L613-666: The Discussion section summarizes relatively well the principles and generalizations from results as well as the significance of results. On the other hand, it does not discuss the results and methods presented in this work in relation to those of others. This is a limitation, so I recommend improving the Discussion in this sense.
We have added text relating the presented method to other fuel treatment planning methods to the Discussion section \u2013 see our reply to the first comment from the Academic Editor. To avoid repetition and keep the size of the manuscript reasonable, we have compared our approach with the most common methods that use site-based fire hazard metrics.
______________________________________________________
Figures 3-4: Please include the scale bar.
We have added scale bars to Figures 3 and 4.
Attachment: response_to_comments_v2.doc
1 Sep 2021
PONE-D-21-15640R1
DETECTING CRITICAL NODES IN FOREST LANDSCAPE NETWORKS TO REDUCE WILDFIRE SPREAD
PLOS ONE
Dear Dr. Yemshanov,
Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE\u2019s publication criteria as it currently stands.
Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.Reviewer 1 has expressed that the manuscript can be accepted pending some minor revisions.\u00a0Reviewer 2 was unavailable to review the revision, so I reviewed the authors\u2019 revisions based on the reviewer\u2019s comments and found that all issues have been adequately addressed.\u00a0Lines\u00a0326-328, \u201cIn general, long-term planning in fire-prone landscapes has little utility because stochastic wildfire activity may override the long-term treatment plans.\u201d is very poor phrasing and inaccurate. Long term planning is a hallmark of forest management generally and there is good work being undertaken at a range of temporal scales that\u00a0specifically\u00a0address stochasticity in fire regimes. There is no need for this baseless claim and it should be removed.plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.Please submit your revised manuscript by Oct 16 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at Please include the following items when submitting your revised manuscript:A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at\u00a0https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see:\u00a0We look forward to receiving your revised manuscript.Kind regards,Paul Pickell, Ph.D.Academic EditorPLOS ONEJournal Requirements:Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article\u2019s retracted status in the References list and also include a citation and full reference for the retraction notice.[Note: HTML markup is below. 
Please do not edit.]
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the \u201cComments to the Author\u201d section, enter your conflict of interest statement in the \u201cConfidential to Editor\u201d section, and submit your \"Accept\" recommendation.
Reviewer #1: (No Response)
**********
2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
**********
3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
**********
4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified.
Reviewer #1: Yes
**********
5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
**********
6. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
Reviewer #1: I am pleased with the revisions. Thank you for clarifying your methods and for explaining where you think the multi-year planning scenario adds value. I still think the method of defining spread probabilities between nodes using the shortest paths from ignition points to perimeters should be framed as an approximation, but this is not central to your work, and the detailed spread paths may not matter much if the \u201cnodes\u201d are large.
A few minor writing suggestions:
L49-51: I would edit this sentence as: \u201cOur results provide new insights into cost-effective planning to mitigate wildfire risk in forest landscapes. The approach should be applicable to other ecosystems with frequent wildfires.\u201d
L64: Drop \u201cin places\u201d?
L78: \u201cfuel treatments\u201d instead of \u201cfuel treatment measures\u201d
L132: Drop \u201carea\u201d
L430-436: Are these sentences necessary after you simplified to an area/node count limit?
**********
7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.
3 Sep 2021
Manuscript PONE-D-21-15640R1 \u201cDETECTING CRITICAL NODES IN FOREST LANDSCAPE NETWORKS TO REDUCE WILDFIRE SPREAD\u201d \u2013 A Reply to Reviewers\u2019 Comments:
Academic Editor: Lines 326-328, \u201cIn general, long-term planning in fire-prone landscapes has little utility because stochastic wildfire activity may override the long-term treatment plans.\u201d is very poor phrasing and inaccurate. Long-term planning is a hallmark of forest management generally, and there is good work being undertaken at a range of temporal scales that specifically addresses stochasticity in fire regimes. There is no need for this baseless claim and it should be removed.
We deleted this text as suggested by the Editor.
__________________________________
Reviewer #1: I am pleased with the revisions. Thank you for clarifying your methods and for explaining where you think the multi-year planning scenario adds value. I still think the method of defining spread probabilities between nodes using the shortest paths from ignition points to perimeters should be framed as an approximation, but this is not central to your work, and the detailed spread paths may not matter much if the \u201cnodes\u201d are large.
We want to clarify that the calculation of fire spread probabilities pij between pairs of nodes (which were used in our optimization model) did not involve a shortest path approximation. Recall that the spread probability value pij depicts the likelihood that a fire ignited in node i spreads to node j without indication of how the fire might spread from i to j. Information about the ignition locations came from the geographic coordinates of individual fires simulated by the Burn-P3 model, and then we used the perimeters of the simulated fires to identify the locations j to which a particular fire ignited in i could spread.
The only place where we utilized a shortest path approximation was in the calculation of the spread probabilities wij between adjacent nodes for visualizing the fire spread patterns. For each simulated fire, we used the shortest path approximation to project the possible fire spread path (with the probability pij) between locations i and j over the network of arcs connecting the adjacent nodes. This map only served to illustrate the fire spread patterns and was not used in optimization modelling. We have edited two sections of the main text, \u201cCalculating the fire spread probabilities pij\u201d and \u201cMapping the fire spread probabilities\u201d, as well as Supplement S4, to make this aspect clearer.
__________________________________
A few minor writing suggestions:
L49-51: I would edit this sentence as: \u201cOur results provide new insights into cost-effective planning to mitigate wildfire risk in forest landscapes. The approach should be applicable to other ecosystems with frequent wildfires.\u201d
Edited as the reviewer suggested.
__________________________________
L64: Drop \u201cin places\u201d?
Dropped.
__________________________________
L78: \u201cfuel treatments\u201d instead of \u201cfuel treatment measures\u201d
Corrected.
__________________________________
L132: Drop \u201carea\u201d
Dropped.
__________________________________
L430-436: Are these sentences necessary after you simplified to an area/node count limit?
These sentences provide background about why we simplified the budget calculations to an area/node count limit and so should stay in the text.
Note that the budget constraint [2] would require a variable cost component if the site treatments were solely managed by ground crews, in which case the total treatment cost would depend on the time required to access the treatment sites (which could be a function of complex terrain and proximity to roads).
Attachment: response_to_comments_Sept_v2.doc
17 Sep 2021
DETECTING CRITICAL NODES IN FOREST LANDSCAPE NETWORKS TO REDUCE WILDFIRE SPREAD
PONE-D-21-15640R2
Dear Dr. Yemshanov,
We\u2019re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.
Within one week, you\u2019ll receive an e-mail detailing the required amendments. When these have been addressed, you\u2019ll receive a formal acceptance letter and your manuscript will be scheduled for publication.
An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.
If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they\u2019ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.
Kind regards,
Paul Pickell, Ph.D.
Academic Editor
PLOS ONE
21 Sep 2021
PONE-D-21-15640R2 DETECTING CRITICAL NODES IN FOREST LANDSCAPE NETWORKS TO REDUCE WILDFIRE SPREAD
Dear Dr. Yemshanov:
I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org.
Thank you for submitting your work to PLOS ONE and supporting open access.
Kind regards,
PLOS ONE Editorial Office Staff, on behalf of Dr. Paul Pickell, Academic Editor, PLOS ONE"}
+{"text": "Myostatin (MSTN) negatively regulates muscle development and positively regulates metabolism through various pathways. Although MSTN function in cattle has been widely studied, the changes in the gut microbiota due to MSTN mutation, which contribute to host health by regulating its metabolism, remain unclear. Here, high-throughput sequencing of the 16S rRNA gene was conducted to analyze the gut microbiota of wild-type (WT) and MSTN mutant (MT) cattle. A total of 925 operational taxonomic units (OTUs) were obtained, which were classified into 11 phyla and 168 genera. Alpha diversity results showed no significant differences between MT and WT cattle.
Beta diversity analyses suggested that the microbial composition of WT and MT cattle was different. Three dominant phyla and 21 dominant genera were identified. The most abundant bacterial genus had a significant relationship with the host metabolism. Moreover, various bacteria beneficial for health were found in the intestines of MT cattle. Analysis of the correlation between dominant gut bacteria and serum metabolic factors affected by MSTN mutation indicated that MSTN mutation affected the metabolism mainly by three metabolism-related bacteria, Ruminococcaceae_UCG-013, Clostridium_sensu_stricto_1, and Ruminococcaceae_UCG-010. This study provides further insight into MSTN mutation regulating the host metabolism by gut microbes and provides evidence for the safety of gene-edited animals.
Yellow cattle are a characteristic resource of China, having 52 breeds, of which Qinchuan, Luxi, Nanyang, Jinnan, and Yanbian cattle have been domesticated and bred for thousands of years. They contain rich genetic resources and have rough feeding tolerance, strong stress resistance ability, strong adaptability, and tender meat. However, because of their long-term use as service cattle, these breeds have common defects, such as a slow growth rate, underdeveloped rear-drive, slow fattening and weight gain, and low carcass production, and therefore cannot meet the requirements of international beef cattle. Myostatin (MSTN), a member of the transforming growth factor \u03b2 (TGF-\u03b2) superfamily, is highly expressed in skeletal muscle tissue and negatively regulates muscle growth.
The bacterial composition of the samples was analyzed at the genus level. A total of 168 bacterial genera from the six fecal samples were identified, of which 21 showed an average relative abundance above 1%. These included [Eubacterium]_coprostanoligenes_group, Clostridium_sensu_stricto_1, uncultured_bacterium_f_Ruminococcaceae, Ruminococcaceae_UCG-014, uncultured_bacterium_f_Muribaculaceae, Alistipes, Prevotellaceae_UCG-003, uncultured_bacterium_o_Mollicutes_RF39, Ruminococcaceae_UCG-013, dgA-11_gut_group, Bacteroides, uncultured_bacterium_f_Lachnospiraceae, Prevotellaceae_UCG-004, uncultured_bacterium_o_Clostridiales, uncultured_bacterium_f_p-2534-18B5_gut_group, and Ruminococcaceae_UCG-009. One of these genera had a relative abundance of 1.23 \u00b1 0.16% in WT cattle; however, its relative abundance in MT cattle was <1% (0.80 \u00b1 0.05%).
The gut microbiota composition of WT and MT cattle was analyzed to identify whether MSTN mutation affects the gut bacterial communities of the cattle. Substantial differences were observed in gut flora between WT and MT cattle. At the genus level, the presence of Caproiciproducens, Erysipelatoclostridium, Prevotellaceae_Ga6A1_group, uncultured_bacterium_f_Bifidobacteriaceae, uncultured_bacterium_o_Clostridiales, uncultured_bacterium_f_Erysipelotrichaceae, Acetanaerobacterium, Aeriscardovia, Candidatus_Saccharimonas, Bifidobacterium, Sphingomonas, and Rikenellaceae_RC9_gut_group was substantially higher in MT cattle than in WT cattle, whereas the presence of Ruminococcaceae_UCG-013, Clostridium_sensu_stricto_1, Solibacillus, Lysinibacillus, Ruminococcaceae_UCG-009, Family_XIII_AD3011_group, Paraclostridium, Blautia, Porphyromonas, uncultured_bacterium_f_Christensenellaceae, Terrisporobacter, Pseudoflavonifractor, XBB1006, Paeniclostridium, Ruminococcaceae_UCG-004, [Eubacterium]_nodatum_group, and uncultured_bacterium_o_Rhodospirillales was substantially lower in MT cattle than in WT cattle (p < 0.05).
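To illustrate how such per-genus group differences can be screened, the following is a minimal sketch using a Mann-Whitney U test on per-sample relative abundances. The numbers are invented placeholders, and with only three samples per group an exact rank test is conservative, so this is illustrative rather than a reconstruction of the study's statistics.

```python
# Sketch of a per-genus MT-vs-WT comparison on relative abundances.
# Values are hypothetical, not the study's data.
from scipy.stats import mannwhitneyu

genus = "Bifidobacterium"
mt = [0.031, 0.028, 0.035]   # relative abundance in MT samples (invented)
wt = [0.009, 0.012, 0.010]   # relative abundance in WT samples (invented)

stat, p = mannwhitneyu(mt, wt, alternative="two-sided")
print(f"{genus}: U = {stat}, p = {p:.3f}")   # a genus is flagged if p < 0.05
```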
Bacterial taxa differing significantly in abundance between the MT and WT groups were identified (LDA > 2). The functions of the bacteria with significant abundance (>1% in each group) in the MT and WT cattle were predicted using the KEGG database; the primary functions of these bacteria were related to metabolism.
The relevance between microbial species abundances and serum biochemical indicators was examined to identify marker species linking the microbiota and serum biochemical indicators. Spearman\u2019s rank correlation coefficients between abundant microbial species (>1% in each group) and the serum biochemical indicators of the MT and WT cattle were determined. Clostridium_sensu_stricto_1 and Ruminococcaceae_UCG-009 were significantly positively correlated to AST (p < 0.05). Clostridium_sensu_stricto_1 showed a significant positive relationship with ALT (p < 0.05). Clostridium_sensu_stricto_1, Rikenellaceae_RC9_gut_group, Ruminococcaceae_UCG-013, and Ruminococcaceae_UCG-009 were significantly correlated to AMY (p < 0.05), of which Rikenellaceae_RC9_gut_group was negatively correlated, whereas the others were positively correlated. Rikenellaceae_RC9_gut_group, Ruminococcaceae_UCG-009, and Ruminococcaceae_UCG-013 were significantly related to LDHL (p < 0.05), of which Rikenellaceae_RC9_gut_group was negatively correlated, whereas the others were positively correlated. Rikenellaceae_RC9_gut_group, Ruminococcaceae_UCG-013, and Ruminococcaceae_UCG-009 were significantly related to LACT (p < 0.05), of which Rikenellaceae_RC9_gut_group was negatively correlated, and the others were positively correlated.
MSTN, a TGF-\u03b2 superfamily member, functions as a negative regulator of muscle growth in many species [6]. Suppression of MSTN expression has been pursued for achieving muscle-improved animals [25], and MSTN knockout animals show well-developed skeletal muscle. Continued in-depth study of regulation mechanisms has shown that MSTN regulates the metabolism of the host, including muscle development, fat metabolism, glucose metabolism [9], bone metabolism, and others. The bacterial microbiota in the host plays vital roles in the energy and nutrition metabolism, reproduction, and immune homeostasis of the host. Concerning the gut microbiota, however, the effect of MSTN mutation has rarely been discussed. This study provides a research basis for evaluating the health status of the MT cattle intestinal microflora and the effect of MSTN mutation on the gut microbial composition. The diversity and richness of the observed OTUs showed no significant difference between the fecal samples of MT and WT cattle, indicating that the MSTN mutation had no significant influence on the richness and diversity of the gut microbiota, which is consistent with the results on MT pigs.
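Returning to the Spearman screen described above, here is a minimal sketch for one genus-indicator pair; the abundance values and AST readings are invented placeholders, not the study's measurements.

```python
# Sketch of a Spearman correlation between a genus abundance and a serum
# biochemical indicator; all numbers are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

abundance = pd.Series([0.12, 0.08, 0.15, 0.05, 0.11, 0.07],
                      name="Clostridium_sensu_stricto_1")
ast = pd.Series([95, 88, 110, 60, 72, 70], name="AST")  # U/L, hypothetical

rho, pval = spearmanr(abundance, ast)
print(f"rho = {rho:.2f}, p = {pval:.3f}")  # reported as significant if p < 0.05
```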
NMDS baMSTN mutation promoted fat metabolism and inhibited its formation as reported before [MSTN mutations can regulate metabolism by regulating changes in the gut flora community.At the phylum level, 11 phyla in the six fecal samples were identified, and Firmicutes, Bacteroidetes, and Tenericutes showed an average relative abundance above 1%. Firmicutes and Bacteroidetes were core microbiota of both MT and WT cattle, which is consistent with the results on MT pigs . Previoud before . In otheRuminococcaceae_UCG-005 [Ruminococcaceae_UCG-014, Ruminococcaceae_UCG-010, Ruminococcaceae_UCG-013, uncultured_bacterium_o_Mollicutes_RF39 [uncultured_bacterium_o_Clostridiales [Ruminococcaceae_UCG-005, Rummeliibacillus [uncultured_bacterium_f_Ruminococcaceae, and Prevotellaceae_UCG-004 [Bacteroides, associated with human diseases, such as colorectal inflammation [Bacteroides are a commonly occurring flora in the living environment of animals [At the genus level, 168 genera in the six fecal samples were identified, of which 21 showed an average relative abundance above 1%. Most of them were related to metabolism by function prediction, of which some were butyrate-producing gut bacteria, such as _UCG-005 , Ruminoctes_RF39 , and uncridiales ; others bacillus , uncultu_UCG-004 . These bammation , were ob animals , indicatCaproiciproducens [Acetanaerobacterium [Sphingomonas [uncultured_bacterium_f_Bifidobacteriaceae [Aeriscardovia [Bifidobacterium [MSTN mutation leads to a decrease in fat content and an increase in the lean meat rate, which enhances the metabolism efficiency and reduces type 2 diabetes risk [Pseudoflavonifractor, a type 2 diabetes-related flora, was decreased in MT cattle, indicating that MSTN can regulate metabolism by regulating intestinal flora. Some flora associated with intestinal inflammation was increased in MT cattle, such as Erysipelatoclostridium [Prevotellaceae_Ga6A1_group [Candidatus_Saccharimonas [Paraclostridium, Porphyromonas, Terrisporobacter, and Paeniclostridium. Therefore, in this study, the effect of MSTN mutation on intestinal inflammation could not be determined. The results in MT pigs showed that MSTN mutation leads to a relative reduction in the inflammatory response [The abundance of some bacteria was significantly higher in MT cattle than those in WT cattle. Most of the abundant bacteria such as roducens , Acetanaacterium , and Sphngomonas are relaeriaceae , Aeriscacardovia , and Bifacterium , which pacterium , and ourtes risk . In our stridium , PrevoteA1_group , and Canarimonas , whereasresponse ; howeverMSTN mutation was correlated with serum metabolic factors. All serum biochemical factors were related to enhanced metabolism, indicating that MSTN mutation leads to increased metabolism, which was consistent with a previous study [Rikenellaceae_RC9_gut_group was positively correlated with HFD-induced \u201charmful indicators\u201d and negatively correlated with \u201cbeneficial indicators\u201d [Rikenellaceae_RC9_gut_group was negatively correlated with AMY, LDHL, and LACT, suggesting that MSTN mutation was negatively correlated with HFD-induced \u201charmful indicators\u201d and positively correlated with \u201cbeneficial indicators.\u201d In addition, Clostridium_sensu_stricto_1 was positively correlated with AST, ALT, and AMY, Ruminococcaceae_UCG-013 was positively correlated with AMY, LDHL, and LACT, and Ruminococcaceae_UCG-010 was positively correlated with AST, AMY, LDHL, and LACT. 
The three abovementioned bacteria are all indicators of enhanced metabolic intensity, which supports the role of MSTN mutation and indicates that MSTN mutation mainly affected the metabolism by regulating these three bacteria.
In this study, high-throughput sequencing of the 16S rRNA gene was performed to analyze fecal samples of WT and MT cattle. MSTN mutation had no remarkable influence on the diversity and richness of the gut microbiota in Luxi cattle. However, MSTN mutation influenced the composition of the gut microbiota. The most abundant bacterial genus had a significant relationship with host metabolism. Moreover, the abundance of microorganisms beneficial for health in the intestine of MT cattle was higher than in the WT cattle. Analysis of the correlation between bacteria and serum metabolic factors affected by MSTN mutation indicated that MSTN mutation affected the metabolism mainly by three metabolism-related bacteria, Ruminococcaceae_UCG-013, Clostridium_sensu_stricto_1, and Ruminococcaceae_UCG-010. The findings of this study provide further insight into MSTN mutation regulating host metabolism via gut microbes and theoretical evidence for the connection between MSTN and gut microbes. In addition, this study demonstrates a novel way to evaluate the safety of gene-edited animals.
Supporting information: S1 Fig (DOCX); S1\u2013S12 Appendix (GZ); S1 Raw images (PDF)."}
+{"text": "Molecular design and evaluation for drug development and chemical safety assessment have been advanced by quantitative structure\u2013activity relationship (QSAR) analysis using artificial intelligence techniques, such as deep learning (DL). Previously, we have reported the high performance of prediction models of molecular initiation events (MIEs) on adverse toxicological outcomes using a DL-based QSAR method, called DeepSnap-DL. This method can extract feature values from images generated from a three-dimensional (3D) chemical structure as a novel QSAR analytical system. However, there is room for improvement in this system's time consumption. Therefore, in this study, we constructed an improved DeepSnap-DL system by combining the processes of generating an image from a 3D chemical structure, DL using the image as input data, and statistical calculation of prediction performance. Consequently, we obtained three prediction models of agonists or antagonists of MIEs that achieved high prediction performance by optimizing the parameters of DeepSnap, such as the angle used in the depiction of the image of a 3D chemical structure, the data split, and the hyperparameters in DL. The improved DeepSnap-DL system will be a powerful tool for computer-aided molecular design as a novel QSAR system.
Quantitative structure\u2013activity relationship (QSAR) models can reduce the time and cost of molecular screening through mathematical prediction models (regression or classification of the properties and activities of a chemical compound based on its chemical structure and statistically significant corresponding physicochemical/toxicological properties), combined with other methods such as homology modeling, molecular docking, and molecular dynamics (MD) simulation [32,33,34]. A DL-based QSAR system, called DeepSnap-DL, was reported to capture molecular features from molecular images photographed around a 3D chemical structure. In that work, the receiver operating characteristic area under the curve (ROC_AUC) of the prediction models for 59 MIE targets in the validation, test, and foldout datasets was 0.818 \u00b1 0.056, 0.803 \u00b1 0.063, and 0.792 \u00b1 0.076, respectively.
In this study, we used the modified DeepSnap-DL with Python and the basic DeepSnap-DL with DIGITS systems to construct prediction models for three MIEs, namely antagonists of the glucocorticoid receptor (PubChem assay AID:720725_GR_ant) and transforming growth factor (TGF)-beta/Smad (PubChem assay AID:1347032_TGF_beta_ant) and agonists of the thyrotropin-releasing hormone receptor (PubChem assay AID:1347030_TRHR_ago), by optimizing parameters in the DeepSnap-DL system. Consistent with the previously reported MIE models, the agonist or antagonist prediction models for the three MIE molecules constructed using the modified DeepSnap-DL with Python showed that it would be an essential tool in a novel QSAR system for computer-aided molecular design.
To analyze the influence of different angles on the snapshot generation of DeepSnap_Python and DeepSnap_DIGITS as 256 \u00d7 256 pixel PNG files, we used 31 and 23 angle settings, from 65\u00b0, 65\u00b0, 65\u00b0 to 350\u00b0, 350\u00b0, 350\u00b0 in Python and from 70\u00b0, 70\u00b0, 70\u00b0 to 345\u00b0, 345\u00b0, 345\u00b0 in DIGITS, for 720725_GR_ant; 15 and 17 settings, from 95\u00b0, 95\u00b0, 95\u00b0 to 325\u00b0, 325\u00b0, 325\u00b0 in Python and from 95\u00b0, 95\u00b0, 95\u00b0 to 355\u00b0, 355\u00b0, 355\u00b0 in DIGITS, for 1347030_TRHR_ago; and 16 and 16 settings, from 75\u00b0, 75\u00b0, 75\u00b0 to 350\u00b0, 350\u00b0, 350\u00b0 in Python and from 75\u00b0, 75\u00b0, 75\u00b0 to 350\u00b0, 350\u00b0, 350\u00b0 in DIGITS, for 1347032_TGF_beta_ant.
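A minimal sketch of DeepSnap-style snapshot generation is given below, assuming a PyMOL installation; the input file name, the depiction style, and the 65\u00b0 increment are illustrative stand-ins for the pipeline's actual settings (the published pipeline renders ball-and-stick models with atom-specific colors).

```python
# Schematic of snapshot generation: render a 3D structure at fixed angle
# increments about the x-, y-, and z-axes and save each view as a
# 256x256 PNG. Requires PyMOL; file name and increment are illustrative.
from pymol import cmd

cmd.load("molecule.sdf")        # a 3D conformer, e.g., from CORINA
cmd.show_as("sticks")           # stand-in for the ball-and-stick depiction
cmd.bg_color("white")

step = 65                       # user-defined angle increment (degrees)
for ax in range(0, 360, step):
    for ay in range(0, 360, step):
        for az in range(0, 360, step):
            cmd.reset()         # restore the default camera orientation
            cmd.turn("x", ax)
            cmd.turn("y", ay)
            cmd.turn("z", az)
            cmd.png(f"snap_{ax}_{ay}_{az}.png",
                    width=256, height=256, dpi=72, ray=1)
```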
As a result, DeepSnap_Python and DeepSnap_DIGITS for the three MIE targets achieved the following prediction performance. The mean ROC_AUC, BAC, MCC, and Acc values in the valid dataset were 0.832 \u00b1 0.048 for ROC_AUC_Python and 0.856 \u00b1 0.029 for ROC_AUC_DIGITS in 720725_GR_ant, 0.875 \u00b1 0.031 for ROC_AUC_Python and 0.886 \u00b1 0.028 for ROC_AUC_DIGITS in 1347030_TRHR_ago, and 0.879 \u00b1 0.015 for ROC_AUC_Python and 0.907 \u00b1 0.020 for ROC_AUC_DIGITS in 1347032_TGF_beta_ant; 0.762 \u00b1 0.044 for BAC_Python and 0.791 \u00b1 0.023 for BAC_DIGITS in 720725_GR_ant, 0.811 \u00b1 0.032 for BAC_Python and 0.829 \u00b1 0.023 for BAC_DIGITS in 1347030_TRHR_ago, and 0.805 \u00b1 0.015 for BAC_Python and 0.849 \u00b1 0.030 for BAC_DIGITS in 1347032_TGF_beta_ant; 0.248 \u00b1 0.065 for MCC_Python and 0.282 \u00b1 0.030 for MCC_DIGITS in 720725_GR_ant, 0.141 \u00b1 0.017 for MCC_Python and 0.155 \u00b1 0.022 for MCC_DIGITS in 1347030_TRHR_ago, and 0.309 \u00b1 0.025 for MCC_Python and 0.384 \u00b1 0.044 for MCC_DIGITS in 1347032_TGF_beta_ant; and 0.790 \u00b1 0.058 for Acc_Python and 0.812 \u00b1 0.044 for Acc_DIGITS in 720725_GR_ant, 0.781 \u00b1 0.030 for Acc_Python and 0.769 \u00b1 0.060 for Acc_DIGITS in 1347030_TRHR_ago, and 0.770 \u00b1 0.029 for Acc_Python and 0.833 \u00b1 0.033 for Acc_DIGITS in 1347032_TGF_beta_ant.
The highest PR_AUC values on the valid dataset across the angles and data-split ratios were 0.660 at 176\u00b0 and train:valid:test = 7:1:2 in 720725_GR_ant, 0.194 at 176\u00b0 and train:valid:test = 3:1:2 in 1347030_TRHR_ago, and 0.453 at 176\u00b0 and train:valid:test = 3:1:1 in 1347032_TGF_beta_ant (Table 3).
These findings suggest that the image augmentation worked effectively. It has been reported that, even when only a small number of images is available, DL can classify well if the number of images is increased by artificial operations such as movement, rotation, enlargement/reduction, and inversion of the original images [56]. However, since an augmented image remains similar to the original image, the risk of overfitting, i.e., a decrease in the performance on the test dataset due to the prediction model fitting too closely to the training dataset, cannot be ruled out [66,67,68].
To investigate the effect of hyperparameters in the DeepSnap-DL with Python system on the prediction performance values of the three MIE targets, we optimized 39 LRs from 0.004 to 0.0000001 in 720725_GR_ant, 24 LRs from 0.007 to 0.000001 in 1347030_TRHR_ago, and 38 LRs from 0.002 to 0.000001 in 1347032_TGF_beta_ant using the valid dataset.
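An LR sweep of this kind can be mechanized as in the sketch below. The small CNN and random tensors are stand-ins for the GoogLeNet architecture and the DeepSnap image datasets; the candidate LRs are a subset of those named above, and the SGD-with-momentum optimizer is an assumption, not a confirmed detail of the pipeline.

```python
# Sketch of a learning-rate sweep: train the same model at each candidate
# LR and keep the lowest validation loss. Model and data are placeholders.
import tensorflow as tf

def build_model(lr):
    # Small CNN stand-in for the GoogLeNet used by DeepSnap-DL.
    model = tf.keras.Sequential([
        tf.keras.layers.Input((256, 256, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.9),
                  loss="sparse_categorical_crossentropy")
    return model

# Dummy stand-ins for the snapshot images and agonist/antagonist labels.
x = tf.random.uniform((8, 256, 256, 3))
y = tf.random.uniform((8,), maxval=2, dtype=tf.int32)

results = {}
for lr in (4e-3, 1e-3, 1e-4, 3e-5, 1e-6):    # subset of the 39 candidate LRs
    tf.keras.utils.set_random_seed(0)         # same init for a fair comparison
    model = build_model(lr)
    hist = model.fit(x, y, validation_split=0.25, epochs=3, verbose=0)
    results[lr] = min(hist.history["val_loss"])

print(min(results, key=results.get))          # LR with the lowest valid loss
```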
In these sweeps, DeepSnap_Python for the three MIE targets achieved the following loss, PR_AUC, and F values. The mean loss values on the train and valid datasets were 0.215 \u00b1 0.231 (train) and 0.263 \u00b1 0.186 (valid) in 720725_GR_ant, 0.098 \u00b1 0.062 (train) and 0.122 \u00b1 0.058 (valid) in 1347030_TRHR_ago, and 0.125 \u00b1 0.110 (train) and 0.236 \u00b1 0.062 (valid) in 1347032_TGF_beta_ant (Table 4). Furthermore, the lowest loss values on the train and valid datasets across the LRs were 0.022 at LR = 0.00003 and 0.124 at LR = 0.00003 in 720725_GR_ant, 0.020 at LR = 0.00002 and 0.066 at LR = 0.0008 in 1347030_TRHR_ago, and 0.038 at LR = 0.00003 and 0.170 at LR = 0.000021 in 1347032_TGF_beta_ant (Table 4).
Finally, to investigate the effect of BS in the improved DeepSnap-DL with Python system on the prediction performance values, we optimized 84 BSs from 2 to 300 in 720725_GR_ant, 13 BSs from 2 to 26 in 1347030_TRHR_ago, and 37 BSs from 2 to 80 in 1347032_TGF_beta_ant using the valid dataset. The highest ROC_AUC values on the test dataset across the BSs were 0.983 at BS = 125 in 720725_GR_ant, 0.934 at BS = 14 in 1347030_TRHR_ago, and 0.925 at BS = 28 in 1347032_TGF_beta_ant (Table 5). Additionally, DeepSnap_Python for the three MIE targets achieved the following loss, PR_AUC, and F values: the mean loss values on the train and test datasets were 0.045 \u00b1 0.033 (train) and 0.119 \u00b1 0.025 (test) in 720725_GR_ant, 0.322 \u00b1 0.013 (train) and 0.314 \u00b1 0.022 (test) in 1347030_TRHR_ago, and 0.097 \u00b1 0.047 (train) and 0.203 \u00b1 0.023 (test) in 1347032_TGF_beta_ant (Table 5).
As a method often used to improve the generalization performance of DL, LR decay, i.e., lowering the LR once learning has progressed to some extent, is known to improve accuracy sharply. However, it was previously reported that BS and LR are proportional, whereas BS and the momentum coefficient are inversely proportional. These findings are expected to lead to drug development through the estimation and identification of new ligands for nuclear receptors.
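One common heuristic consistent with the BS-LR proportionality noted above is the linear scaling rule, which adjusts the LR in proportion to the batch size. The helper below is a generic sketch of that rule under assumed base values; it is not a rule stated in this manuscript.

```python
# Linear scaling rule (assumed base values): scale the learning rate in
# proportion to the batch size so training dynamics stay comparable.
def scaled_lr(batch_size, base_lr=1e-4, base_batch=32):
    """Return a learning rate scaled linearly with the batch size."""
    return base_lr * batch_size / base_batch

for bs in (2, 14, 28, 125, 300):     # batch sizes examined in the sweeps
    print(bs, f"{scaled_lr(bs):.2e}")
```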
The 3D chemical structures of the compounds from the SDF files were depicted as 3D ball-and-stick models, with different colors corresponding to different atoms, by Jmol, an open-source Java viewer for the 3D molecular modeling of chemical structures. Snapshots of each structure were captured around the x-, y-, and z-axes, saved as 256 \u00d7 256-pixel resolution PNG files (RGB), and split into the three train, valid, and test datasets, as previously reported. We applied the SMILES format for 3D conformational import to generate the 3D chemical database with rotatable torsions and saved it as a structure data file (SDF) using the molecular operating environment (MOE) 2018 scientific applications. The images were then divided into the train, valid, and test datasets; additionally, the external test dataset was permanently fixed. TensorFlow and Keras on CentOS Linux 7.3.1611, with the CNN GoogLeNet, were used to train and fine-tune the prediction models on all 2D PNG images produced by the DeepSnap-DL-Python system. Background colors in the images were changed to the color values in PyMOL, where the force field, which is a set of parameters for the bond lengths, angles, torsional parameters, electrostatic properties, and van der Waals interactions, uses the Merck Molecular Force Field (MMFF).

The improved DeepSnap-DL-Python system used a new 3D conformational import application, called SMILES_TO_SDF, to produce the SDF files from the SMILES format. We used PyMOL, an open-source molecular visualization system written in the Python programming language, to obtain high-quality 3D molecular models of the chemical structures as ball-and-stick models with different colors corresponding to different atoms. The 3D chemical structures can produce different images depending on the viewing direction; these are captured automatically by DeepSnap as snapshots with user-defined angle increments with respect to the x-, y-, and z-axes. The prediction models of the three MIE targets were constructed using these images of the 3D chemicals as input data for the DIGITS-based DL. Another system, a DeepSnap-DL modified to use TensorFlow and Keras with Python, was also applied: the SMILES format was passed to the new SMILES_TO_SDF application to produce high-quality 3D molecular models saved as a chemical database in SDF format, 2D PNG images were produced from the SDF file by DeepSnap, and the prediction models were constructed using these images as input data by DL with TensorFlow and Keras (DeepSnap-DL-Python).

Next, using the structural information for these chemicals derived from the SMILES format, the 3D chemical structure of each compound with \u201crotatable torsions\u201d was depicted using the MOE application software program and optimized to generate a single low-energy conformation using CORINA classic. These 3D chemical structures were saved in SDF format as a database file. Then, molecular images were generated as snapshots of the 3D structure from the SDF file using the DeepSnap method at different angles along the x-, y-, and z-axis directions for one molecule. Classification performance was evaluated based on a confusion matrix defined by the cutoff value (\u03b8) obtained from the Youden\u2019s Index (YI) [78,79]. We analyzed the probability of the prediction results using the prediction model with the lowest minimum loss_valid value among the 30 examined epochs using the DeepSnap-DL-DIGITS method.
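The multi-angle snapshot capture described above can be approximated with the PyMOL Python API. This is a simplified sketch rather than the DeepSnap implementation: rendering settings are reduced to a minimum, the input file name is hypothetical, rotations accumulate across axes (DeepSnap's exact scheme may differ), and the 176-degree increment is taken from the best-performing angle reported earlier.

from pymol import cmd   # run inside PyMOL, or with the pymol module installed

ANGLE = 176             # degrees per snapshot increment

cmd.load("chemicals_3d.sdf", "mol")   # hypothetical SDF from the previous step
cmd.hide("everything", "mol")
cmd.show("sticks", "mol")
cmd.show("spheres", "mol")            # ball-and-stick-like depiction
cmd.set("sphere_scale", 0.25)
cmd.bg_color("white")

shot = 0
for axis in ("x", "y", "z"):
    angle = 0
    while angle < 360:
        # 256x256 PNG (RGB), ray-traced, matching the image size stated above
        cmd.png(f"snap_{shot:04d}.png", width=256, height=256, ray=1)
        cmd.rotate(axis, ANGLE, "mol")
        angle += ANGLE
        shot += 1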
We used the medians of each predicted value as representative values for the target molecules, using the statistical analysis software JMP\u00ae Pro 14, as previously reported [51,52]. However, the DeepSnap-DL-Python system automatically obtains the probability of prediction results with the lowest minimum loss_valid value among the 30 examined epochs, which are the numbers of repeats for one training dataset, modulated by early stopping. Additionally, the performance of each model was automatically calculated in terms of the metrics ROC_AUC, precision-recall AUC (PR_AUC), balanced accuracy (BAC), F, Matthew\u2019s correlation coefficient (MCC), accuracy (Acc), and loss. These performance metrics are defined as follows, where TP, FN, TN, and FP denote true positive, false negative, true negative, and false positive, respectively: sensitivity = TP/(TP + FN); specificity = TN/(TN + FP); precision = TP/(TP + FP); BAC = (sensitivity + specificity)/2; Acc = (TP + TN)/(TP + TN + FP + FN); F = 2 \u00d7 precision \u00d7 sensitivity/(precision + sensitivity); MCC = (TP \u00d7 TN - FP \u00d7 FN)/sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)).

To determine the optimal cutoff point for the definition of TP, FN, TN, and FP, we adopted the method of maximizing sensitivity - (1 - specificity), called YI (YI = sensitivity + specificity - 1). This index has a value ranging from 0 to 1, where 1 represents the maximum effectiveness and 0 represents the minimum effectiveness [78,79]. Additionally, ROC_AUC denotes the area under the curve (AUC) for the receiver operating characteristics (ROC). The PR_AUC is obtained by summing precision over the recall intervals defined by successive thresholds, where j iterates over the true points, Np is the number of true points, T is the number of thresholds, and prect is the precision at threshold t; for broader cases, let prec0 = prec1 and precT = 0 [53,54].

In this study, we constructed prediction models for antagonists of the glucocorticoid receptor and TGF-beta/Smad and an agonist of the thyrotropin-releasing hormone receptor using the classic DeepSnap-DL system with DIGITS and the improved DeepSnap-DL system with TensorFlow and Keras, using the Tox21 10K library. We achieved higher throughput and decreased computational costs with the improved DeepSnap-DL system by optimizing the parameters in DeepSnap. Consequently, the improved DeepSnap-DL system should be a powerful, advanced QSAR system in the toxicological and biochemical/cheminformatic fields."}
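The metric definitions above can be made concrete in a short script. The following is a minimal sketch, not the authors' implementation: it assumes binary labels y_true and predicted probabilities y_prob (placeholder names) and uses scikit-learn to compute ROC_AUC, one common PR_AUC estimator, and the confusion-matrix metrics at the YI-optimal cutoff.

import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             roc_curve, confusion_matrix)

def evaluate(y_true, y_prob):
    roc_auc = roc_auc_score(y_true, y_prob)
    pr_auc = average_precision_score(y_true, y_prob)

    # Youden's Index: cutoff maximizing sensitivity + specificity - 1 (= tpr - fpr)
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    theta = thresholds[np.argmax(tpr - fpr)]

    tn, fp, fn, tp = confusion_matrix(y_true, y_prob >= theta).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    prec = tp / (tp + fp)
    bac = (sens + spec) / 2
    acc = (tp + tn) / (tp + tn + fp + fn)
    f = 2 * prec * sens / (prec + sens)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"ROC_AUC": roc_auc, "PR_AUC": pr_auc, "BAC": bac,
            "F": f, "MCC": mcc, "Acc": acc, "cutoff": theta}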
+{"text": "In response to COVID-19, the Government of Ethiopia has been taking a series of policy actions beyond public health initiatives alone. Therefore, this study aimed to assess the applicability of the basic preventive measures against the pandemic COVID-19 and the associated factors among the residents of Guraghe Zone from 18th to 29th September, 2020. A community-based cross-sectional study was conducted in Guraghe Zone during this period. A systematic random sampling method was applied to the predetermined 634 samples. Variables with a p-value of less than 0.25 in the bivariate analysis were considered candidates for the multivariable logistic regression model. A p-value <0.05 was used as the cutoff point to determine statistical significance in the multiple logistic regression for the final model. In this study, 17.7% of the respondents applied the basic preventive measures towards the prevention of the pandemic COVID-19. In addition, being a rural resident, having studied grades 1\u20138, being a farmer, being currently not married, having a family size of 1-3, having no diagnosed medical illness, and having poor knowledge were the factors that were statistically significant in the multivariable logistic regression model. Despite the application of preventive measures and vaccine delivery, the applicability of the pandemic COVID-19 preventive measures was too low, which indicates that the Zone is at risk for the infection. Rural residents, those with a lower educational level, farmers, the unmarried, those with a smaller family size, those without a diagnosed medical illness, and those with poor knowledge were prone to infection with the pandemic COVID-19 due to their lower practice of applying the basic preventive measures. In addition, awareness creation should be put into practice at all levels of the community, especially among the lower educational classes and rural residents. The pandemic of coronavirus disease 2019 (COVID-19) started in December 2019 in Wuhan, China.

Reviewers\u2019 comments (first round). Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Yes; Reviewer #2: Yes; Reviewer #3: No. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: No; Reviewer #2: Yes; Reviewer #3: No. Have the authors made all data underlying the findings fully available? All reviewers: Yes. Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #1: No; Reviewers #2 and #3: Yes.

Reviewer #1: 1. The manuscript has a lot of typographical errors: grammatical, spelling, punctuation, and consistency in word usage, e.g., COVID-19, COVID-2019, and COVID 19; Guraghe zone and Gurage zone; \u201cdyed\u201d instead of \u201cdied\u201d. 2. High knowledge: if the respondent answers 11 of the 14 knowledge assessment questions correctly (10). What if the respondent answers 12, 13, or 14 of the 14 knowledge assessment questions correctly? 3. Moderate knowledge: if the respondent answers nine of the 14 knowledge assessment questions correctly (10). What if the respondent answers 10 of the 14 knowledge assessment questions correctly? 4. Poor or low knowledge: if the respondent responds to below nine knowledge assessment questions correctly (10). 5. Good knowledge: if the respondent responds to above nine knowledge assessment questions correctly. 6.
How attitude was measured is not operationalized. 7. The maximum total score ranged from 0\u201313, with a higher score indicating better knowledge about COVID-19. How can the maximum score be 13 if an individual who answers all 14 questions correctly will score 14? So, how do you justify it? 8. \u201cData were cleaned, edited, coded, and entered into Epi-data version 3.1 and exported to SPSS version 25 for Windows.\u201d How can you clean, edit, and code before data entry? 9. Television is not social media? 10. How did you manage the multicollinearity between being a rural residence and a lower educational level? 11. \u201cShould supply the basic preventive measures such as mask and sanitizers for financially poor individuals\u201d: you did not assess the availability of these preventive accessories, so how can you recommend this? 12. Those independent variables that had a p-value less than 0.25 in the bivariate analysis were entered into the multivariable logistic regression model. What is your justification for using 0.25 as the cutoff point?

Reviewer #2: In this manuscript, the authors conducted a community-based cross-sectional study to assess the applicability of basic preventive measures of the pandemic COVID-19 and associated factors among residents in Guraghe Zone. The study is well written, is easy to follow, and covers a hot topic, but some issues should be improved before publication. Comments: 1. The study is well thought out. I believe that the topic and the content of the manuscript are different, so it would be advisable to modify the title, e.g., toward KAP. 2. In the method section, replace \u201cMethod\u201d with \u201cMethod and material\u201d. 3. There are many language mistakes; please revisit the manuscript for correction. 4. References for your operational definitions? 5. Please give some explanations about the current availability of the vaccine. 6. Please complete all necessary information in the title of each table. 7. Discussion section: it would be useful to the reader to add some interesting recent literature about the updates against the COVID-19 outbreak and related tools to counteract it. 8. Very few references were used, which results in poor interpretation of your results; please use the following references. 9. Conclusion section: the paragraph requires a general revision to eliminate redundant sentences; please refine it and do not repeat it in the abstract.

Reviewer #3: Major: why use a design effect of 1.5? Is it scientifically recommended to use a design effect of 1.5? Minor: 1. There are sentences and paragraphs without references in the introduction. 2. In the study-area description, please use recent data, not more than 5 years old. 3. The sampling procedure is multistage; it is better to represent it graphically so that readers can easily understand it. 4. In Table 5 there are missing data; please incorporate these data. 5. It is better to exclude the data that are not significant in the multivariable regression. 6. Please include the p-values for the factors that are significant in the multivariable analysis.

16 Apr 2021. Response to the Reviewers. 1. Is the manuscript technically sound, and do the data support the conclusions?
(Reviewer #1: Yes; Reviewer #2: Yes; Reviewer #3: No.) Response: Thank you; certain revisions were made. 2. Has the statistical analysis been performed appropriately and rigorously? (Reviewer #1: No; Reviewer #2: Yes; Reviewer #3: No.) Response: Thank you; it was revised in detail. 3. Have the authors made all data underlying the findings fully available? (All reviewers: Yes.) Response: Thank you. 4. Is the manuscript presented in an intelligible fashion and written in standard English? (Reviewer #1: No; Reviewers #2 and #3: Yes.) Response: Thank you; it was revised by a language expert. 5. Review comments: Response: the manuscript was revised, and we responded point by point to each concern raised, with corrections highlighted in the revised version.

Reviewer #1, point 1 (typographical errors and inconsistent word usage): Thank you. All the inconsistencies were resolved and the whole manuscript was revised by a language expert. Point 2 (what if the respondent answers 12, 13, or 14 of the 14 knowledge questions correctly?): It was revised; \u201chigh knowledge\u201d was meant as at least 11 of the 14 assessment questions. Point 3 (what if the respondent answers 10 of 14 correctly?): It was revised; \u201cmoderate knowledge\u201d was meant as at least nine of the knowledge assessment questions. Point 4 (poor knowledge): It was revised as: \u201cPoor knowledge: if the respondent responds to < 8 knowledge assessment questions correctly\u201d.
Point 5 (good knowledge): It was revised as: \u201cGood knowledge: if the respondent responds to > 9 knowledge assessment questions correctly\u201d. Point 6 (attitude measurement not operationalized): It was mentioned at the end of the data collection tool and procedure, and it has now been stated under the subtitle \u201coperational definition\u201d. Point 7 (maximum total score of 13 versus 14 questions): Thank you; it was a typing error, which was meant to be 14. If a respondent answers all the knowledge assessment questions correctly, this is stated as a score of 14 of the 14 questions. Point 8 (cleaning, editing, and coding before data entry): The statement was rephrased as: \u201cData were entered into Epi-data version 3.1 and exported to SPSS version 25 for Windows, then cleaned, edited, and coded, and exploratory data analysis was carried out to check the levels of missing values, the presence of influential outliers, and multicollinearity\u201d. Point 9 (television is not social media): It was revised; it was meant as mass media. Point 10 (multicollinearity between rural residence and lower educational level): All the variables were checked for multicollinearity, but multicollinearity did not exist; therefore, no management was required. Point 11 (recommending supplies whose availability was not assessed): This was missed during manuscript preparation and has now been incorporated. This study was conducted as a baseline for further studies and for conducting community service across the study area; it was presented within the university, and community service was delivered by providing sanitizers, masks, and health information. Point 12 (justification for the 0.25 cutoff): In this study, we used 12 independent variables. With a large number of variables the cutoff point can be lowered, but when the number of variables is modest it can be increased; in addition, increasing the cutoff point retains the marginally significant variables. Therefore, we used the cutoff point of 0.25 to select the candidate variables for the multivariable analysis.
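The two-stage model building discussed in this exchange (bivariate screening at p < 0.25, then a multivariable logistic model at p < 0.05) can be sketched in code. The authors used SPSS; this analogous Python/statsmodels version is illustrative only, and it assumes a survey table with a binary outcome column and dummy-coded predictors, with all file and column names hypothetical.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("guraghe_survey.csv")     # hypothetical data file
y = df["applies_measures"]                 # binary outcome (1 = applies measures)
predictors = [c for c in df.columns if c != "applies_measures"]

# Stage 1: bivariate screening at p < 0.25
candidates = []
for var in predictors:
    fit = sm.Logit(y, sm.add_constant(df[[var]])).fit(disp=False)
    if fit.pvalues[var] < 0.25:
        candidates.append(var)

# Stage 2: multivariable model; AORs are exp(coef), CIs from conf_int()
final = sm.Logit(y, sm.add_constant(df[candidates])).fit(disp=False)
print(final.summary())

Retaining the marginally significant variables at the 0.25 screening stage, as the authors argue, guards against discarding confounders that only become significant jointly.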
Reviewer #2 noted that the authors conducted a community-based cross-sectional study to assess the applicability of basic preventive measures of the pandemic COVID-19 and associated factors among residents in Guraghe Zone, and that the study is well written, easy to follow, and covers a hot topic, but that some issues should be improved before publication. Point 1 (modify the title, e.g., toward KAP): The study did not aim to investigate knowledge and attitude; both were independent variables, described as independent factors associated with the applicability of the basic preventive measures of COVID-19. As shown in the introduction, the focus was the gap in the applicability of the basic preventive measures, and the outcome variable is the applicability of the basic preventive measures of the pandemic COVID-19, not KAP. Point 2 (replace \u201cMethod\u201d with \u201cMethod and material\u201d): It was replaced accordingly. Point 3 (language mistakes): The whole manuscript was revised by a language expert. Point 4 (references for the operational definitions): Thank you; all the operational definitions were cited. Point 5 (explanations about the current availability of the vaccine): It was incorporated in the introduction section. Point 6 (complete the information in each table title): Thank you; it was revised and all the necessary information was incorporated. Point 7 (add recent literature on updates against the COVID-19 outbreak): Certain updated recent articles were cited. Point 8 (very few references used): Thank you; all of them were cited. Point 9 (revise the conclusion to eliminate redundancy): The conclusion was revised and certain amendments were made.

Response to Reviewer #3. Why use a design effect of 1.5? The design effect is determined by the researcher in consideration of the heterogeneity of the population. Most researchers use a design effect of 2, but it is also possible to use 1.5; we decided on 1.5 considering the heterogeneity of the populations and the cost that we could afford. Minor point 1 (sentences and paragraphs without references in the introduction): Thank you; all the statements were cited. Minor point 2 (missing data in Table 5): These were not missing data; they were for the variables that were not statistically significant in the multivariable analysis, but to resolve this we have split the table in two (Tables 5 and 6). Minor point 3 (exclude the data that are not significant in the multivariable regression): It was revised and removed.
Minor point 4 (include the p-values for the factors that are significant in the multivariable analysis): Thank you; they were included.

15 Jun 2021 (PONE-D-21-05518R1). Decision letter: after careful consideration, the editor found that the manuscript has merit but does not fully meet PLOS ONE\u2019s publication criteria as it currently stands, and invited a revised version addressing the points raised during the review process, in particular the second reviewer\u2019s concern, with sentences in the abstract and conclusion rephrased to minimize repetition. Kind regards, Enamul Kabir, Academic Editor, PLOS ONE.

Reviewers\u2019 comments (second round):
All three reviewers indicated that their comments had been addressed, and all answered Yes to the questions on technical soundness, statistical analysis, data availability, and standard English. Reviewer #1: the line spacing in the abstract is not consistent, and the line spacing of the conclusion is not consistent with the other parts; please make them consistent. Reviewer #2: the authors addressed all comments; however, they did not understand one comment regarding the redundancy of the conclusion in both the abstract and the main body; therefore, please rephrase and rewrite the conclusion to minimize repetition. Reviewer #3: (no response).

15 Jun 2021. Author response: both concerns were addressed and corrected.
21 Jul 2021 (PONE-D-21-05518R2). Decision letter: after careful consideration, the manuscript has merit but does not yet fully meet PLOS ONE\u2019s publication criteria; one of the reviewers raised some minor issues that need to be fixed before taking a final decision. Kind regards, Enamul Kabir, Academic Editor, PLOS ONE.

Reviewers\u2019 comments (third round):
All reviewers indicated that their comments had been addressed and answered Yes to the standard assessment questions. Reviewer #1: the authors have carefully addressed almost all the issues raised in the first review process; however, the additional comments below, highlighted in the main manuscript (abstract, data processing and analysis, and results sections), should be addressed to enhance the quality of the paper. \u2022 \u201cP-value \u22640.05\u201d is better changed to \u201c<0.05\u201d. \u2022 All prevalences, proportions, and magnitudes, including their confidence intervals, are better given to one decimal point . \u2022 The odds ratios and their respective confidence intervals are better given to two decimal points.
\u2022 As a principle, the prevalence, proportion, magnitude, odds, and their respective confidence intervals should have consistent decimal points, which means one decimal point for prevalence, proportion, and magnitude and two decimal points for odds. \u2022 All the recommendations have no owner; they do not tell for whom each recommendation is intended. So, it is better to indicate the specific stakeholders for each of your recommendations. Reviewer #2: the author addressed all comments provided by me and all other reviewers; therefore, I confirm that it is accepted. Reviewer #3: the author addressed all comments adequately, with proper statistical analysis, and it is technically good; I recommend it for publication.

22 Jul 2021. Response to reviewers. \u2022 \u201cP-value \u22640.05 better to change <0.05\u201d: it was revised and corrected as \u201cP-value <0.05 was used as a cutoff point to determine statistical significance in multiple logistic regressions for the final model\u201d. \u2022 One decimal point for prevalence, proportion, and magnitude: it was revised and corrected all over the manuscript, e.g., \u201cIn this study, 17.7% of the respondents apply the basic preventive measures towards the prevention of the pandemic COVID-19\u201d. \u2022 Two decimal points for the odds ratios and their confidence intervals: it was revised and corrected, e.g., \u201cIn addition, being rural resident, being studied grade 1-8, being a farmer, currently not married, having family size 1-3, have no diagnosed medical illness and having poor knowledge were factors which are statistically significant in multivariable logistic regression model\u201d. \u2022 Consistent decimal points: it was revised and corrected as per your recommendations. \u2022 Recommendation owners: it was revised, and the recommendations were given for the specific bodies.

11 Aug 2021 (PONE-D-21-05518R3). The manuscript has been judged scientifically suitable for publication and will be formally accepted once it meets all outstanding technical requirements. Within one week, you\u2019ll receive an e-mail detailing the required amendments.
Reviewers\u2019 comments (final round): both reviewers indicated that all comments had been addressed and answered Yes to the standard assessment questions.
Reviewer #1: all comments have been addressed; the language, typographic errors, and analysis, and overall the paper, follow scientific write-up formats, and it is fit to be accepted for possible publication. Reviewer #2: all comments are well addressed and incorporated in the main body of the manuscript, so please do not submit it again for revision. Neither reviewer chose to make their identity public. 16 Aug 2021 (PONE-D-21-05518R3): the manuscript was deemed suitable for publication in PLOS ONE and moved to the production department. Kind regards, PLOS ONE Editorial Office Staff, on behalf of Dr. Enamul Kabir, Academic Editor, PLOS ONE."}

+{"text": "Nature Communications 10.1038/s41467-021-25534-2, published online 6 September 2021. Correction to: In the original PDF version of this Article, there was an error in the code within the \u2018Methods\u2019 subsection \u2018scETM software\u2019. The original text read:
\u201cfrom scETM import scETM,UnsupervisedTrainer
model = scETM
trainer = UnsupervisedTrainer
trainer.train
model.get_all_embeddings_and_nll(adata)\u201d
The correct format is:
\u201cfrom scETM import scETM, UnsupervisedTrainer
model = scETM
trainer = UnsupervisedTrainer
trainer.train
model.get_all_embeddings_and_nll(adata)\u201d
This has been corrected in the PDF version of the Article; the HTML version was correct at the time of publication."}

+{"text": "Cassava (Manihot esculenta) is an important clonally propagated food crop in tropical and subtropical regions worldwide. Genetic gain by molecular breeding has been limited, partially because cassava is a highly heterozygous crop with a repetitive and difficult-to-assemble genome. ASE bias was often tissue specific and inconsistent across different tissues. Direction-shifting was observed in <2% of the ASE transcripts. Despite high gene synteny, the HiFi genome assembly revealed extensive chromosome rearrangements and abundant intra-genomic and inter-genomic divergent sequences, with large structural variations mostly related to LTR retrotransposons.
We use the reference-quality assemblies to build a cassava pan-genome and demonstrate its importance in representing the genetic diversity of cassava for downstream reference-guided omics analysis and breeding. Here we demonstrate that Pacific Biosciences high-fidelity (HiFi) sequencing reads, in combination with the assembler hifiasm, produced genome assemblies at near-complete haplotype resolution, with higher continuity and accuracy than conventional long sequencing reads. We present 2 chromosome-scale haploid genomes phased with Hi-C technology for the diploid African cassava variety TME204. With consensus accuracy >QV46, contig N50 >18 Mb, BUSCO completeness of 99%, and 35k phased gene loci, it is the most accurate, continuous, complete, and haplotype-resolved cassava genome assembly so far. The phased and annotated chromosome pairs allow a systematic view of the heterozygous diploid genome organization in cassava with improved accuracy, completeness, and haplotype resolution. They will be a valuable resource for cassava breeding and research. Our study may also provide insights into developing cost-effective and efficient strategies for resolving complex genomes with high resolution, accuracy, and continuity.

The cassava (Manihot esculenta, NCBI:txid3983) genome has a haploid genome size of \u223c750 Mb. DNA was extracted from individual BAC clones using the Nucleobond Xtra midi kit (Macherey-Nagel) and used for PacBio library preparation by the French Plant Genomic Resources Center (CNRGV) of the French National Research Institute for Agriculture, Food and Environment (INRAE). PacBio sequencing was performed on the Sequel II system with a movie time of 30 hours and a 120-min pre-extension step by the Gentyane Genomic Platform (INRAE). Circular consensus sequence (CCS) reads per BAC clone were generated using the SMRT Analysis Software SMRT Link v9.0.0. The technical quality and potential sample contamination of Illumina PE reads were evaluated using FastQC v0.11.8 and FastQ Screen v0.11.1, respectively. The technical quality of PacBio raw data was checked using the \u201cQC module\u201d in the PacBio SMRT Analysis Software SMRT Link version 8.0. Iso-Seq reads were clustered into high-quality transcripts using the \u201cIso-Seq Analysis\u201d application in the PacBio SMRT Analysis Software (SMRT Link v10.1.0.119588). The technical quality of Hi-C data was checked using HiCUP v0.8.0. Genome complexities, such as repeat content and the level of heterozygosity, were evaluated from k-mers in the Illumina PE reads using Preqc in SGA v0.10.15. PacBio CLR reads were assembled using Falcon in pb-assembly (RRID:SCR_016089), and HiFi reads were assembled using hifiasm (RRID:SCR_021069), HiCanu, and the Improved Phased Assembler (IPA).
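The Hi-C-phased hifiasm assembly step named above can be illustrated with a minimal wrapper. This sketch is not the authors' pipeline: it assumes hifiasm (v0.15.3 or later, which accepts Hi-C read pairs via --h1/--h2) is on PATH, and all file names and the thread count are placeholders.

import subprocess

# Hi-C-integrated HiFi assembly; emits one phased contig GFA per haplotype.
subprocess.run(
    ["hifiasm", "-o", "tme204", "-t", "32",
     "--h1", "hic_R1.fastq.gz", "--h2", "hic_R2.fastq.gz",
     "hifi_ccs.fastq.gz"],
    check=True,
)

# Convert each haplotype-resolved GFA to FASTA for downstream evaluation.
for hap in ("hap1", "hap2"):
    with open(f"tme204.hic.{hap}.p_ctg.gfa") as fin, \
         open(f"tme204.{hap}.fasta", "w") as fout:
        for line in fin:
            if line.startswith("S"):          # GFA segment lines hold sequences
                _, name, seq = line.split("\t")[:3]
                fout.write(f">{name}\n{seq}\n")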
Assembly statistics were collected using QUAST v4.5 (RRID:SCR_001228), and NG50 values were calculated. Consensus accuracy (QV) and k-mer completeness were evaluated using the k-mer\u2013based method Merqury (v1.1). For the evaluation of structural accuracy, the Merqury k-mer analysis results were first used to compute false duplication rates, where k-mers that appeared more than twice in each haploid assembly were used to identify artificial duplications. PacBio CLR reads were then aligned to each haploid genome, and the coverage was analyzed using the Asset software. Functional completeness was measured using BUSCO v5 (RRID:SCR_015008), based on the completeness of single-copy orthologs discovered in plants (Viridiplantae Odb10).

Two sets of haplotype-resolved, phased contig (haplotig) assemblies were generated using hifiasm (v0.15.3) with a combination of HiFi reads and PE Hi-C reads. Haplotigs were first validated against the high-density genetic map of cassava. Hi-C reads were then mapped back to each set of haplotigs independently using the Arima mapping pipeline and used for Hi-C scaffolding. Starting with the assembly of all resolved alleles, repeat elements were predicted using RepeatModeler v2.0.1 (RRID:SCR_015027), with dependencies on TRF (v4.09), RECON (RRID:SCR_021170), RepeatScout (RRID:SCR_014653), and RepeatMasker (RRID:SCR_012954), together with LTR_Retriever (RRID:SCR_017623), Ninja, MAFFT (RRID:SCR_011811), and CD-HIT (RRID:SCR_007105).

Genome annotation was first performed by transferring reference gene models from AM560 v8.1 to the TME204 haplotype assemblies using liftoff (v1.6.1). Complementary to the transferred reference gene models, ab initio gene prediction was performed using AUGUSTUS (RRID:SCR_008417) with expression evidence. Predicted protein sequences were compared against AM560 v8.1 protein sequences and UniProt/Swiss-Prot (release 2021_03) using blastp v2.10.1+, and against InterPro using InterProScan v5.52-86.0. The best protein matches from AM560 v8.1, UniProt/Swiss-Prot, and InterPro, plus Gene Ontology (GO) terms and pathways, were used to functionally annotate the predicted genes.

Transcripts annotated in TME204 H1 and H2 were pooled and de-duplicated using cd-hit-est (v4.8.1) to generate the reference transcriptome for expression quantification. Differentially expressed transcripts (DETs) were defined as those with |log2FC| > 2 and adjusted P < 0.00001. For ASE, bi-allelic transcripts were identified by reciprocal blastn (v2.10.0) comparison of H1 and H2 transcripts; a unique, bi-directional best-matched transcript pair was considered as alleles A and B. Expression values for bi-allelic transcripts were a subset of the master quantification table including all resolved alleles. ASE was determined using the same package, DESeq2, with adjusted P < 0.05. ASE transcripts overlapping between tissues and haplotypes were analyzed using upsetR.

For alignment-based sequence similarity analysis, the cassava reference genome AM560 v8.0 was first disassembled into contig sequences using the utility function \u201csplit_scaffold\u201d in IDBA (v1.1.3). For SV analysis using HiFi reads, TME204 HiFi reads were aligned to the reference contigs of AM560 and the TME204 haplotigs using minimap2 v2.15r905, and SVs were called from these alignments and summarized. For chromosome-level comparisons, the alignment-free method smash++ (v20.04) was used. Pan-genomes were constructed using minigraph (v0.15-r426). For all selected gene sets, GO enrichment analysis was performed using topGO v2.44.0 [90] with the P-value cutoff set to 0.00001; the only exception was the 141 homozygous DETs, for which the P-value cutoff was set to 0.001. The GO annotation of the ab initio predicted gene models was used as the background gene set.
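The reciprocal best-match pairing used above to define bi-allelic transcripts can be sketched as follows. This is an illustration rather than the published code: it assumes two blastn tabular (outfmt 6) files, H1-vs-H2 and H2-vs-H1, with hypothetical names, and keeps transcript pairs that are mutual best hits by bitscore.

def best_hits(path):
    # Keep the highest-bitscore subject per query from an outfmt-6 table.
    best = {}
    with open(path) as fh:
        for line in fh:
            f = line.rstrip("\n").split("\t")
            query, subject, bitscore = f[0], f[1], float(f[11])
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return {q: s for q, (s, _) in best.items()}

h1_to_h2 = best_hits("h1_vs_h2.blastn.tsv")
h2_to_h1 = best_hits("h2_vs_h1.blastn.tsv")

# A pair (a, b) is bi-allelic when a's best hit is b AND b's best hit is a.
bi_allelic = [(a, b) for a, b in h1_to_h2.items() if h2_to_h1.get(b) == a]
print(f"{len(bi_allelic)} reciprocal best-match allele pairs")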
All supporting data and materials are available in the GigaScience GigaDB database [Raw sequencing read data from PacBio and Illumina (Hi-C and shotgun) underlying this article are available in the European Nucleotide Archive (ENA) database and can be accessed with accession No. PRJEB43673 (or ERP127652 as the secondary accession number in ENA). Assembled genome sequences of TME204 H1 and H2 are available in the NCBI database and can be accessed with accession No. PRJNA758616 and PRJNA758615, respectively. Assembled BAC clone sequences are available in the NCBI GenBank database and can be accessed with accession Nos. database .ASE: allele-specific expression; BAC: bacterial artificial chromosome; BP: biological process; bp: base pairs; CCS: circular consensus sequence; CDS: coding sequence; CLR: continuous long reads; CMD: Cassava Mosaic Diseases; CPU: central processing unit; DE: differentially expressed/differential expression; DET: differentially expressed transcript; ENA: European Nucleotide Archive; GO: gene ontology; HiFi: high-fidelity; HMW: high molecular weight; Indel: insertion and deletion; IPA: Improved Phased Assembler; kb: kilobase pairs; Mb: megabase pairs; MF: molecular function; NCBI: National Center for Biotechnology Information; numt's: nuclear mitochondrial pseudogene regions; PacBio: Pacific Biosciences; PE: paired-end; QV: quality value; RAM: root apical meristem; SAM: shoot apical meristem; SMRT: Single Molecule Real-Time; SNP: single-nucleotide polymorphism; SV: structural variation; TPM: transcript per million; VGP: the Vertebrate Genome Project.The cassava TME204 cultivar used in our study was obtained by ETH Zurich from the International Institute of Tropical Agriculture (IITA) in Nigeria in 2003 prior to the implementation of the International Treaty on Plant Genetic Resources for Food and Agriculture . TME204 The authors declare that they have no competing interests.This work was supported by the Bill & Melinda Gates Foundation (INV-008213), ETH Zurich, and the Functional Genomics Center Zurich (FGCZ). D.P. is funded by national funds through FCT under the Institutional Call to Scientific Employment Stimulus (reference CEECINST/00026/2018). W.G. is supported by a Yushan Scholarship of the Ministry of Education in Taiwan.W.Q., Y.L., A.P., R.S., and W.G. designed the study. Y.L. and C.C. prepared DNA and RNA samples for sequencing. A.P., S.G., and A.B. prepared CLR, HiFi, and Iso-Seq libraries and performed PacBio sequencing. Y.L., N.R., E.P., S.V., and M.F. generated the BAC sequences. W.Q., Y.L., P.S., D.P., and W.G. analyzed data. W.Q., Y.L., A.P., A.B., P.S., and W.G. wrote the manuscript. 
All authors reviewed the final manuscript before submission.giac028_GIGA-D-21-00333_Original_SubmissionClick here for additional data file.giac028_GIGA-D-21-00333_Revision_1Click here for additional data file.giac028_GIGA-D-21-00333_Revision_2Click here for additional data file.giac028_GIGA-D-21-00333_Revision_3Click here for additional data file.giac028_Response_to_Reviewer_Comments_Revision_1Click here for additional data file.giac028_Response_to_Reviewer_Comments_Revision_2Click here for additional data file.giac028_Response_to_Reviewer_Comments_Revision_3Click here for additional data file.giac028_Reviewer_1_Report_Original_SubmissionZehong Ding -- 11/11/2021 ReviewedClick here for additional data file.giac028_Reviewer_1_Report_Revision_1Zehong Ding -- 1/16/2022 ReviewedClick here for additional data file.giac028_Reviewer_2_Report_Original_SubmissionC Robin Buell -- 11/17/2021 ReviewedClick here for additional data file.giac028_Reviewer_2_Report_Revision_1C Robin Buell -- 1/16/2022 ReviewedClick here for additional data file.giac028_Supplemental_FileClick here for additional data file."} +{"text": "With the emergence of new severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants and the acquisition of novel mutations in existing lineages, the need to implement methods capable of monitoring viral dynamics arises. We report the emergence and spread of a new SARS-CoV-2 variant within the B.1.575 lineage, containing the E484K mutation in the spike protein (named B.1.575.2), in a region in northern Spain in May and June 2021. SARS-CoV-2-positive samples with cycle threshold values of \u226430 were selected to screen for presumptive variants using the TaqPath coronavirus disease 2019 (COVID-19) reverse transcription (RT)-PCR kit and the TaqMan SARS-CoV-2 mutation panel. Confirmation of variants was performed by whole-genome sequencing. Of the 200 samples belonging to the B.1.575 lineage, 194 (97%) corresponded to the B.1.575.2 sublineage, which was related to the presence of the E484K mutation. Of 197 cases registered in the Global Initiative on Sharing Avian Influenza Data (GISAID) EpiCoV database as lineage B.1.575.2, 194 (99.5%) were identified in Pamplona, Spain. This report emphasizes the importance of complementing surveillance of SARS-CoV-2 with sequencing for the rapid control of emerging viral variants. During the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, several variants that were catalogued as variants of concern (VOCs) or variants of interest (VOIs) by the European Centre for Disease Prevention and Control have emerged in different countries. As of 23 June 2021, the four important lineages with evident impact on transmissibility, severity, and immunity are lineages B.1.1.7 (Alpha), B.1.351 (Beta), B.1.617.2 (Delta), and P.1 (Gamma) 14. LineagThe lineage B.1.575 emerged in the United States, and since its emergence two new sublineages have been identified. The B.1.575.1 sublineage was classified in the Phylogenetic Assignment of Named Global Outbreak (PANGO) lineage system as a Spanish sublineage of B.1.575 with spike mutations P681H, S494P, and T716I, and the B.1.575.2 sublineage, whose main characteristic is the presence of the E484K spike mutation, also originated in Spain screening and whole-genome sequencing.TC) value of \u226430. 
Occasionally, targeted samples are also included according to epidemiological criteria.The Microbiology Department of the Complejo Hospitalario de Navarra, which is located in Pamplona, the capital city of Navarra, Spain , is the reference laboratory of the public health system for SARS-CoV-2. Upper respiratory specimens for SARS-CoV-2 detection are routinely collected at hospitals and primary care centers and processed by commercial RT-qPCR methods. Since the end of 2020, when variant B.1.1.7 became predominant in the United Kingdom, prospective sample-based surveillance has been conducted in our community to identify novel emerging SARS-CoV-2 variants. A two-step laboratory procedure includes all positive SARS-CoV-2 samples from hospital patients and community settings with a cycle threshold (Screening of presumptive SARS-CoV-2 variants carrying the \u0394H69/\u0394V70 deletion was performed using the TaqPath coronavirus disease 2019 (COVID-19) RT-PCR kit , following the manufacturer\u2019s instructions. Then, all samples of non-B.1.1.7 variants were subjected to a second RT-qPCR assay with the TaqMan SARS-CoV-2 mutation panel (Thermo Fisher Scientific). At that time, we customized the TaqMan assay to detect SARS-CoV-2 spike protein with the N501Y, E484K, K417N, and K417T mutations. All samples were sequenced.https://www.gisaid.org), Nextstrain (https://nextstrain.org), and the PANGO lineage system (https://cov-lineages.org) on the Illumina NovaSeq 6000 system located in the public company NASERTIC, following the manufacturer\u2019s instructions. The viral lineage classifications were performed with the Global Initiative on Sharing Avian Influenza Data (GISAID) EpiCoV database (ges.org) \u201311.http://gisaid.org) under accession numbers EPI_ISL_2510533, EPI_ISL_2510585 to EPI_ISL_2510589, EPI_ISL_2510592, EPI_ISL_2516620, EPI_ISL_2516686 to EPI_ISL_2516689, EPI_ISL_2516691, EPI_ISL_2516698 to EPI_ISL_2516704, EPI_ISL_2516709, EPI_ISL_2516710 to EPI_ISL_2516715, EPI_ISL_2516717 to EPI_ISL_2516721, EPI_ISL_2516723, EPI_ISL_2516724, EPI_ISL_2516726 to EPI_ISL_2516728, EPI_ISL_2516731, EPI_ISL_2516732, EPI_ISL_2516734 to EPI_ISL_2516749, EPI_ISL_2516753, EPI_ISL_2516754, EPI_ISL_2516756, EPI_ISL_2516757, EPI_ISL_2516759 to EPI_ISL_2516761, EPI_ISL_2516855 to EPI_ISL_2516857, EPI_ISL_2516861 to EPI_ISL_2516877, EPI_ISL_2516879, EPI_ISL_2516881, EPI_ISL_2516934, EPI_ISL_2516937 to EPI_ISL_2516940, EPI_ISL_2934576 to EPI_ISL_2934590, EPI_ISL_2934594, EPI_ISL_2934597, EPI_ISL_2934599 to EPI_ISL_2934603, EPI_ISL_2934617 to EPI_ISL_2934623, EPI_ISL_2934625, EPI_ISL_2934626, EPI_ISL_2934629 to EPI_ISL_2934635, EPI_ISL_2934638 to EPI_ISL_2934640, EPI_ISL_2934642, EPI_ISL_2934645 to EPI_ISL_2934651, EPI_ISL_2934656 to EPI_ISL_2934659, EPI_ISL_2934661, EPI_ISL_2934662, EPI_ISL_2934665 to EPI_ISL_2934669, EPI_ISL_2934672, EPI_ISL_2934673, EPI_ISL_2934676 to EPI_ISL_2934681, EPI_ISL_2934684 to EPI_ISL_2934686, EPI_ISL_2934699, EPI_ISL_2934703 to EPI_ISL_2934722, EPI_ISL_2934728 to EPI_ISL_2934730, EPI_ISL_2934844, EPI_ISL_2934846 to EPI_ISL_2934848, EPI_ISL_2934852, EPI_ISL_2934853, EPI_ISL_2934897, EPI_ISL_2934905, EPI_ISL_2934906, EPI_ISL_2934910, and EPI_ISL_2934917 (B.1.575.2), EPI_ISL_1392993 and EPI_ISL_1393347 (B.1.575.1), and EPI_ISL_1622537, EPI_ISL_1622538, EPI_ISL_1622541, and EPI_ISL_1622850 (B.1.575).All genomes generated in this work were deposited in the GISAID EpiCoV database related to the B.1.575 lineage, i.e., 4 (2%) B.1.575, 2 (1%) B.1.575.1, and 194 (97%) B.1.575.2. 
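The two-step screening logic described above (TaqPath S-gene target screening followed by the TaqMan mutation panel, with every sample confirmed by whole-genome sequencing) can be sketched as a small decision function. The sketch below is purely illustrative and is not the laboratory's software; the mutation names are the ones screened in this study:
def presumptive_variant(s_target_dropout, spike_mutations):
    # s_target_dropout: TaqPath S-gene target failure, a proxy for the
    # del(H69/V70) deletion. spike_mutations: set of calls from the TaqMan
    # panel, drawn from {"N501Y", "E484K", "K417N", "K417T"}.
    # Every call below is presumptive; whole-genome sequencing confirms it.
    if s_target_dropout:
        return "presumptive B.1.1.7-like (del H69/V70)"
    if "E484K" in spike_mutations and "N501Y" not in spike_mutations:
        return "E484K without N501Y: candidate for sequencing (e.g., B.1.575.2)"
    if "E484K" in spike_mutations and "N501Y" in spike_mutations:
        return "E484K plus N501Y: presumptive Beta/Gamma-like profile"
    return "no screened mutation detected: classify by sequencing"

print(presumptive_variant(False, {"E484K"}))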
Among the common substitutions present in these lineages, four occurred in the spike protein . All samThe first case with the B.1.575 lineage to be identified in Pamplona dates back to 20 January 2021; after that date, no other case was identified until 15 March 2021, when three isolates showing mutations common to the B.1.575 lineage were recorded. Between week 20 and week 26 of 2021, we identified 194 cases with lineage B.1.575, which had acquired another S mutation, E484K, classified in the GISAID EpiCoV and Pangolin databases as representing sublineage B.1.575.2. The first case with the B.1.575.2 lineage was identified in a sample isolated on 19 May 2021 (week 20 of 2021); the number of cases grew to 48 cases in weeks 23 and 24 and declined suddenly at the end of June due to the emergence of the Delta (B.1.617.2) variant . This vaTo determine the distribution of the SARS-CoV-2 B.1.575 lineage, we searched in the GISAID EpiCoV and PANGO lineage databases. From May to July, the lineage and sublineages of B.1.575 have increased exponentially in different countries. The B.1.575 lineage was predominant in the United States (90%), while the B.1.575.1 and B.1.575.2 sublineages dominated in Spain .The B.1.575.2 sublineage was predominant in Navarra, since 99.5% of the cases (194/197 cases) registered in the GISAID EpiCoV database were identified in this region. In contrast, we did not identify any genomes of the B.1.575 or B.1.575.1 lineage carrying the E484K mutation.In this study, we observed the emergence of lineage B.1.575.2 with the spike E484K mutation, circulating in Pamplona in association with an outbreak. Pamplona is a small city located in the north of Spain, near France, and it could serve as a spread model for other cities in the world. The new lineage displayed a low prevalence (4.10%) among SARS-CoV-2 genomes analyzed between 23 March 2020 and 30 June 2021. Still, it was already dispersed in our city and represented 97% of the B.1.575 sequences detected during that period. The E484K mutation is considered one of the most important substitutions associated with reduced antibody neutralization potency and efficacy of the SARS-CoV-2 vaccine \u201315. The \u20131\u2013Screening PCR is a useful tool for detecting mutations, mainly because of its rapidity. Future identification with this method, including new mutations characteristic of the lineage, could serve as a rapid method of variant identification. However, whole-genome sequencing remains the gold standard technique for pandemic control.To our knowledge, this is the first study that describes the emergence of the lineage B.1.575.2. This genetic variant includes a mutation in the spike protein (E484K). This SARS-CoV-2 genetic variant was discovered in Pamplona in association with an outbreak, demonstrating the importance of genetic sequencing, especially for new community outbreaks.This brief report emphasizes the importance of exhaustive surveillance for circulating variants of SARS-CoV-2, to reduce community transmission, to assess the COVID-19 vaccine effectiveness, and to prevent the emergence of more transmissible variants that could further increase the severity of the epidemic in the country."} +{"text": "G3 Genes|Genomes|Genetics, 2021, 11(3)., DOI: https://doi.org/10.1093/g3journal/jkaa029https://doi.org/10.25387/g3.12813299\u201d. This should have been \u201chttps://figshare.com/articles/figure/Supplemental_Material_for_Shweta_Basargekar_and_Ratnaparkhi_2020/12813299?file=25392737\u201d. 
This has now been corrected online. When this paper was first published, in the data availability section, the location of the supplementary data was erroneously given as the "figshare DOI:" link quoted above."}
{"text": "This protocol tracks engaged transcription complexes across functional genomic regions, demonstrated in human K562 erythroleukemia cells. Nascent RNA-sequencing tracks transcription at nucleotide resolution. The genomic distribution of engaged transcription complexes, in turn, uncovers functional genomic regions. Here, we provide analytical steps to (1) identify transcribed regulatory elements de novo genome-wide, (2) quantify engaged transcription complexes at enhancers, promoter-proximal regions, divergent transcripts, gene bodies, and termination windows, and (3) measure distribution of transcription machineries and regulatory proteins across functional genomic regions. For complete details on the use and execution of this protocol, please refer to the original publication.
•Identification of transcribed regulatory elements de novo genome-wide
•Quantification of engaged transcription complexes at functional genomic regions
•Measuring distribution of transcription regulators across the functional genomic regions
•Revealing functional genomic regions from nascent transcription data
Transcription is a fundamental process in every organism, and its coordination defines RNA synthesis in the cell. A plethora of transcription factors, cofactors, chromatin remodelers and RNA-processing complexes orchestrate the process of RNA synthesis, as do the composition of the transcription complexes and the three-dimensional architecture of the linear DNA molecule. Wet-lab protocols and computational pipelines for PRO-seq have been reported. Here, we begin with sequenced and mapped PRO-seq reads in bed format, deposited in the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/) under accession code GSE181161.
We derive functional genomic regions from the profile of active transcription and the refGene-annotated coordinates of gene transcripts. The refGene list of transcripts can be downloaded from the UCSC genome golden path: http://hgdownload.soe.ucsc.edu/goldenPath/. Here, the human hg38 file (2020-08-17) is downloaded in step 4 using wget (https://www.gnu.org/software/wget/).
Enhancers can be identified de novo using the pattern of divergent transcription, via the dREG web interface (https://django.dreg.scigap.org/) or a local installation (https://github.com/Danko-Lab/dREG). For broad accessibility, the web-based tool is used here. Unnormalized bigWig files are generally provided with PRO-seq data. If needed, the code below converts a bed file reporting the whole read into strand-specific 3′-coverage bigWig files using genomeCoverageBed and bedGraphToBigWig (https://www.encodeproject.org/software/bedgraphtobigwig/).
The required chrSizes.txt file is a two-column data frame that contains the chromosome names and the sizes, obtainable for the appropriate genome from http://hgdownload.cse.ucsc.edu/goldenpath/.
awk '$6 == "+"' PROseq.bed | genomeCoverageBed -i stdin -3 -bg -g chrSizes.txt > PROseq_pl.bedgraph
awk '$6 == "-"' PROseq.bed | genomeCoverageBed -i stdin -3 -bg -g chrSizes.txt > PROseq_temp.bedgraph
awk '{$4=$4*-1; print}' PROseq_temp.bedgraph > PROseq_mn.bedgraph
bedgraphToBigWig PROseq_pl.bedgraph chrSizes.txt PROseq_pl.bigWig
bedgraphToBigWig PROseq_mn.bedgraph chrSizes.txt PROseq_mn.bigWig
Regulatory proteins coordinate every step of transcription. In this protocol, the functional genomic regions are first identified and color-coded to show enhancer regions in green, sites of divergent transcription (div) in purple, promoter-proximal regions (pp) in orange, gene body (gb) in black, the region around the CPS in light blue, and the termination window (tw) in pink.
The data analyses reported here are conducted in the command-line environment of the Apple OS X operating system. Lists of genomic coordinates are processed in R (https://www.r-project.org), and coordinates of functional genomic regions are compared to active sites of nascent transcription using bedtools (https://bedtools.readthedocs.io/en/latest/). Related ChIP-seq analysis scripts are available via GitHub (https://github.com/Vihervaara/ChIP-seq_analyses).
Enhancer coordinates reported here are the dREG-identified sites of divergent transcription that do not overlap with any annotated promoter of a gene. Active genes, instead, are further divided into distinct regions.
Timing: 1–3 h
This section identifies transcribed regulatory elements in the investigated cell line and condition.
1.Create an account at https://django.dreg.scigap.org/. Log in.
a.Choose dREG peak calling.
b.Upload the unnormalized 3′-coverage bigWig plus strand file to the correct box.
c.Upload the unnormalized 3′-coverage bigWig minus strand file to the correct box.
Please note that the bigWig files need to be unnormalized (raw counts: positive values for the plus strand and -1 for the minus strand). Name the run to describe your data and press Launch. The run time depends on the size of the file and available processing capacity, commonly ranging 1–3 h.
2.When the run is complete, download the prefix.dREG.peak.full.bed.gz file and gunzip it.
3.Move the downloaded file to the working directory and rename it to a simpler form:
mv ~/Downloads/K562_hg38.dREG.peak.full.bed /pathToWorkingDirectory/dREGcalls_hg38_K562.bed
Timing: 30 min
These steps (4–6) divide genes into functional regions.
4.Download the RefGene datafile, gunzip it and open it in R.
wget -c -O hg38.refGene.txt.gz http://hgdownload.soe.ucsc.edu/goldenPath/hg38/database/refGene.txt.gz
gunzip hg38.refGene.txt.gz
R
refGene = read.table("hg38.refGene.txt", header=FALSE, sep="\t")
dim(refGene)  #gives the number of rows and columns in the refGene dataframe
head(refGene)  #shows the first six rows of the dataframe
str(refGene)  #shows information on each dimension
5.Name the refGene columns and remove unnecessary columns and chromosome entries.
names(refGene) = c("bin","txID","chr","strand","txStart","txEnd","cdsStart","cdsEnd","exonCount","exonStarts","exonEnds","score","gene","cdsStartStat","cdsEndStat","exonFrames")
refGene = refGene[refGene$chr %in% paste0("chr", c(1:22,"X","Y","M")), ]  #maintains chromosomes 1-22, X, Y and M.
refGene$chr = factor(refGene$chr)  #drops the extra levels removed above.
refGene = refGene[, c("chr","txID","gene","strand","txStart","txEnd")]
head(refGene)  #look at the dataframe again
6.Generate coordinates of functional regions for every annotated gene transcript.
### Subset the genes based on the strand:
refGene_pl = subset(refGene, strand == "+")
refGene_mn = subset(refGene, strand == "-")
### Genes on the plus strand:
refGene_pl$TSS = refGene_pl$txStart  # TSS
refGene_pl$CPS = refGene_pl$txEnd  # CPS
refGene_pl$DIVs = refGene_pl$txStart-750  # region of divergent transcription
refGene_pl$DIVe = refGene_pl$txStart-251
refGene_pl$PPs = refGene_pl$TSS-250  # promoter-proximal region
refGene_pl$PPe = refGene_pl$TSS+249
refGene_pl$GBs = refGene_pl$TSS+250  # genebody
refGene_pl$GBe = refGene_pl$CPS-501
refGene_pl$CPSs = refGene_pl$CPS-500  # CPS region
refGene_pl$CPSe = refGene_pl$CPS+499
refGene_pl$TWs = refGene_pl$CPS+500  # termination window
refGene_pl$TWe = refGene_pl$CPS+10499
#### Genes on the minus strand:
refGene_mn$TSS = refGene_mn$txEnd  # TSS
refGene_mn$CPS = refGene_mn$txStart  # CPS
refGene_mn$DIVs = refGene_mn$txEnd+251  # divergent transcription region
refGene_mn$DIVe = refGene_mn$txEnd+750
refGene_mn$PPs = refGene_mn$TSS-249  # promoter-proximal region
refGene_mn$PPe = refGene_mn$TSS+250
refGene_mn$GBs = refGene_mn$CPS+501  # genebody
refGene_mn$GBe = refGene_mn$TSS-250
refGene_mn$CPSs = refGene_mn$CPS-499  # CPS region
refGene_mn$CPSe = refGene_mn$CPS+500
refGene_mn$TWs = refGene_mn$CPS-10499  # termination window
refGene_mn$TWe = refGene_mn$CPS-500
#### combine the data of plus and minus strands:
refGene = rbind(refGene_pl, refGene_mn)
refGene$promC1 = refGene$TSS-500
refGene$promC2 = refGene$TSS+500
#### generate the data file:
write.table(refGene[, c("chr","promC1","promC2","txID","gene","strand")], file="hg38_refGenes_TSSpm500.txt", col.names=F, row.names=F, quote=F, sep="\t")
save.image()  # saves the above entries in the R workspace
q()  # exits R
y  # answers 'yes' for saving the workspace image
In the RefGene data file, the 'txStart' is smaller than the same transcript's 'txEnd'.
This is convenient when working with the coordinates of genes, but it leads to different columns reporting different functional sites depending on whether the plus or the minus strand encodes the gene. For example, the annotated TSS for genes on the plus strand is reported as txStart, while the TSS for genes on the minus strand is reported as txEnd. In the following steps, we add a column \u2018TSS\u2019 that reports the annotated TSS for each transcript. We then output a file that reports a 1000-nt window around the TSS. These windows of 1000 nt are intersected with the coordinates of dREG-identified sites of divergent transcription in step 7 to identify genes with active transcription and distal sites of divergent transcription, i.e., enhancers. We also obtain transcript-specific coordinates of promoter-proximal region (PP), divergent transcription (DIV), gene body (GB), CPS, and termination window (TW) according to the scheme in https://github.com/Vihervaara/functionalGenomicRegions).Please note that an R script comprising the steps from reading the refGene file in step 4 to the end of step 6 is provided in GitHub 9.Generate a new data frame in R that contains only actively transcribed genes. In essence, the \u2018refGene\u2019 data frame generated in step 6 is reduced here to contain only gene transcripts which initiate transcription. The \u2018txID\u2019 column contains an individual identification code for each transcript variant.refGeneAct\u00a0= subset10.Write files that contain the coordinates of promoter-proximal and divergent transcription regions. These coordinates were generated in step 6.write.table], file=\"ppPolII.txt\", col.names=F, row.names=F, quote=F, sep=\"\\t\")write.table], file=\"divTx.txt\", col.names=F, row.names=F, quote=F, sep=\"\\t\")11.Remove short genes before generating files with the gene body coordinates. This stage is needed to omit gene transcripts, where, due to shortness of the gene, the gene body would overlap with the promoter-proximal region (stretching to\u00a0+500 from TSS) and CPS (starting from \u2212500 from CPS) windows.shortGenes\u00a0= subsetrefGeneAct_\u00a0= subset#in K562 cells mapped against hg38, 351 active genes are removed at this stage.12.Write the files for the CPSs, gene body coordinates and transcription windows.write.table], file=\"CPS.txt\", col.names=F, row.names=F, quote=F, sep=\"\\t\")write.table], file=\"TW.txt\", col.names=F, row.names=F, quote=F, sep=\"\\t\")refGeneAct_\u00a0= subset #ensuring no negative gene body lengths remain.write.table], file=\"geneBody.txt\", col.names=F, row.names=F, quote=F, sep=\"\\t\")save.imageqyOf note: In the RefGen for hg38 file, short genes constitute 3,575 transcripts. Of these, only 351 genes were uniquely mappable, identified as \u2018Active\u2019 and, therefore, removed from our list of active gene transcripts in K562 cells.The steps 8\u201312 can also be run via an R script provided in GitHub .Timing: 20\u00a0minIn steps 13 and 14, active sites of transcription are allocated to individual functional genomic regions. 
The active sites of transcription are derived from the nascent transcription sequencing data.13.a.echo retaining 3prime most coordinate of the bed file\u00a0awk \u2018$6\u00a0== \"+\"\u2019 PROseq_K562_hg38.bed > tempPL.bed\u00a0awk \u2018$6\u00a0== \"-\"\u2019 PROseq_K562_hg38.bed > tempMN.bedSplit the file based on the strand of the mapped readb.\u00a0awk \u2018{$2\u00a0= $3; print}\u2019 tempPL.bed > tempPL_3p.bed\u00a0awk \u2018{$3\u00a0= $2; print}\u2019 tempMN.bed > tempMN_3p.bedActive site of transcription (3\u2032-most nt) is the coordinate given in the third column of plus strand reads and the second column for minus strand reads. In this step, the active site of transcription (single nt) will be placed both to the second and the third column.c.\u00a0cat tempPL_3p.bed tempMN_3p.bed | tr \u2018 \u2018 \u2018\\t\u2018 > temp_3p.bed\u00a0sortBed -i temp_3p.bed > PROseq_K562_hg38_3pnt.bed\u00a0rm \u2217temp\u2217The reads from plus and minus strands are combined, the data is converted to tab-delimited, and the reads sorted based on genomic coordinates. Intermediary files are removed.Generate a bed file that only contains the 3\u2032-nucleotide (active site of transcription) of each read B. In the14.Intersect the coordinates of nascent transcription with the coordinates of functional genomic regions.Note: To allocate each engaged Pol II complex only once, we use the -u option in the bedtools intersect command and sequentially allocate the coordinates of active transcription to the distinct genomic categories. In each round, two different files are generated: File 1 retains active sites that localize to the given functional category (options -u and -wa). File 2 reports the active sites that do not localize to the given functional category (controlled with option -v). The file 2 will be used in the subsequent round to ensure that each engaged Pol II is allocated only to one genomic region. The order of the intersections is: i) promoter-proximal regions, ii) sites of divergent transcription, iii) enhancers, iv) CPSs, v) gene bodies, and vi) termination windows. In this order, promoter-associated Pol II molecules are not counted into enhancer transcription. Furthermore, transcription at intragenic enhancers is allocated to enhancers instead of gene bodies. Finally, termination windows can be relatively short or extend over several kilobases are allocated to distinct functional\u00a0categories.Timing: 10\u00a0minThe code in step 15 calculates engaged Pol II molecules in each category of functional genomic regions . Engaged Pol II that does not occur in any of the categories is indicated as unannotated. Step 15 further analyses the distribution of engaged Pol II across the functional genomic regions in the dataset. Here, we only have a single sample and, therefore, focus on the distribution (proportions) of active sites across the genomic regions. With PRO-seq data, a normalization factor that accounts for differences in data handling and sequencing depth can be computed. Detailed description of normalization factors is out of the scope of this study. In brief, invariant whole-genome spike-in from a distinct organism can be added to all samples before the run-on reaction. This equal amount of foreign chromatin provides a count of nascent transcription against which the samples can be normalized . 
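As a minimal sketch of applying such a normalization factor, the Python below scales the raw Pol II counts (column 4) of the functionalGenomicRegions.bed file generated in the steps that follow. The factor definition (scaling each sample to a reference sample's spike-in depth) and the spike-in read counts are hypothetical placeholders:
# Hypothetical spike-in read counts; one common choice is to scale every
# sample to the spike-in depth of a chosen reference sample.
reference_spike_reads = 50000
sample_spike_reads = 45210
factor = reference_spike_reads / sample_spike_reads

with open("functionalGenomicRegions.bed") as fin, \
     open("functionalGenomicRegions_norm.bed", "w") as fout:
    for line in fin:
        if line.startswith("track"):
            fout.write(line)  # keep the genome browser header track line
            continue
        cols = line.rstrip("\n").split("\t")
        cols[3] = "%.2f" % (float(cols[3]) * factor)  # column 4: raw Pol II count
        fout.write("\t".join(cols) + "\n")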
If desired, the color-code for each category of functional genomic regions can be adjusted here by changing the rgb values.awk -F \u2018\\t\u2019 -v OFS=\u2018\\t\u2019 \u2018{ $(NF+1)\u00a0=\"243,132,0\"; print }\u2019 ppPolCounts.tmp > ppPolCounts.bedawk -F \u2018\\t\u2019 -v OFS=\u2018\\t\u2019 \u2018{ $(NF+1)\u00a0=\"178,59,212\"; print }\u2019 ppDivCounts.tmp > ppDivCounts.bedawk -F \u2018\\t\u2019 -v OFS=\u2018\\t\u2019 \u2018{ $(NF+1)\u00a0=\"115,212,122\"; print }\u2019 enhancerCounts.tmp > enhancerCounts.bedawk -F \u2018\\t\u2019 -v OFS=\u2018\\t\u2019 \u2018{ $(NF+1)\u00a0=\"0,0,0\"; print }\u2019 geneBodyCounts.tmp > geneBodyCounts.bedawk -F \u2018\\t\u2019 -v OFS=\u2018\\t\u2019 \u2018{ $(NF+1)\u00a0=\"103,200,249\"; print }\u2019 CPSCounts.tmp > CPSCounts.bedawk -F \u2018\\t\u2019 -v OFS=\u2018\\t\u2019 \u2018{ $(NF+1)\u00a0=\"255,54,98\"; print }\u2019 TerminationWinCounts.tmp > TerminationWinCounts.bed18.Combine the files. Add an extra column \".\" to obtain a genome browser compatible bed file for visualization.cat ppPolCounts.bed ppDivCounts.bed enhancerCounts.bed geneBodyCounts.bed CPSCounts.bed TerminationWinCounts.bed > catRegions.tempawk -F \u2018\\t\u2019 -v OFS=\u2018\\t\u2019 \u2018{ $(NF+1)\u00a0=\".\"; print }\u2019 catRegions.temp > catRegions2.temp19.Reorganize the columns and add a header track.awk \u2018{print $1 \"\\t\" $2 \"\\t\" $3 \"\\t\" $7 \"\\t\" $5 \"\\t\" $6 \"\\t\" $2 \"\\t\" $3 \"\\t\" $8}\u2019 catRegions2.temp > catRegions3.tempawk \u2018!seen++\u2019 catRegions3.temp | sortBed > catRegions4.temptouch headerLine.txtecho track name=\"functional_genomic_regions\" itemRgb=\"On\" >> headerLine.txtcat headerLine.txt catRegions4.temp > functionalGenomicRegions.bedrm \u2217.temprm headerLine.txtNote: As mentioned above, the number of engaged Pol II complexes at the distinct genomic regions generated in this protocol originates from raw read counts of the data. This count depends on the sequencing depth. To get a normalized count of engaged Pol II, the column 4 (reporting raw count) in the \u2019functionalGenomicregions.bed\u2019 file can be multiplied with a normalization factor.The generated \u2019functionalGenomicregions.bed\u2019 file can be read in genome browsers to show the identified functional genomic regions as well as the count of engaged Pol II at each indicated region . As starting material, please use a file that reports the summit coordinate of each peak, named with '_summits.bed', for example: K562_TBP_summits.bed, K562_GATA1_summits.bed, K562_CTCF_summits.bed, K562_H3K36me3_summits.bed .Obtain or generate datasets of interest. Place them in the folder of your working directory. Here, we used TBP, GATA1, CTCF, H3K36me3, NELFe, p300 and RAD21 ChIP-seq data . We remab.for x in the code below.\u00a0In the loop, the ${x} will be replaced with the factor name, one listed factor after another.The peak summits for each given chromatin-associated factor are intersected with the\u00a0distinct genomic regions. The strategy described in step 14 is used to ensure that each identified enrichment of a factor at the genome (peak) is counted once. For efficiency, we use a loop function that takes one bed file of mapped peak summits at a time. To define which datasets are analyzed in the loop function, please place names of factors in quotation marks, separated by a space, after the c.Run the code below using the factors of your choice. 
Please, ensure that the files are placed in the correct folder (working directory) and that the file names correspond to the names in the code. The files listing the coordinates of functional genomic regions were generated in steps 1\u201312.Count genome-associating factors at distinct functional genomic regions.for x in \"TBP\" \"GATA1\" \"CTCF\" \"H3K36me3\"do## Factor-derived reads at promoter-proximal regionsbedtools intersect -u -wa -a K562_${x}_summits.bed -b ppPolII.txt > ${x}_K562_ppPolII.bedbedtools intersect -v -a K562_${x}_summits.bed -b ppPolII.txt > ${x}_ppRemoved.bed## Factor-derived reads at the sites of divergent transcriptionbedtools intersect -u -wa -a ${x}_ppRemoved.bed -b divTx.txt > ${x}_K562_ppDiv.bedbedtools intersect -v -a ${x}_ppRemoved.bed -b divTx.txt > ${x}_ppdivRemoved.bed## Factor-derived reads at enhancersbedtools intersect -u -wa -a ${x}_ppdivRemoved.bed -b enhancers.bed > ${x}_K562_enhancers.bedbedtools intersect -v -a ${x}_ppdivRemoved.bed -b enhancers.bed > ${x}_ppdivEnhRemoved.bed## Factor-derived reads at CPSbedtools intersect -u -wa -a ${x}_ppdivEnhRemoved.bed -b CPS.txt > ${x}_K562_CPS.bedbedtools intersect -v -a ${x}_ppdivEnhRemoved.bed -b CPS.txt > ${x}_ppdivEnhCPSRemoved.bed## Factor-derived reads at GBbedtools intersect -u -wa -a ${x}_ppdivEnhCPSRemoved.bed -b geneBody.txt > ${x}_K562_GB.bedbedtools intersect -v -a ${x}_ppdivEnhCPSRemoved.bed -b geneBody.txt > ${x}_ppdivEnhCPSgbRemoved.bed## Factor-derived reads at termination windowsbedtools intersect -u -wa -a ${x}_ppdivEnhCPSgbRemoved.bed -b TW.txt > ${x}_K562_TW.bedbedtools intersect -v -a ${x}_ppdivEnhCPSgbRemoved.bed -b TW.txt > ${x}_K562_noGene_noEnh.bedrm \u2217Removed.beddone21.a.script Factor_counts_at_functional_regions.txtInitiate the script that collects factor counts at the functional genomic regions.b.Plot the counts of rows in each intersected bed file.c.control\u00a0+ D in the terminal window.Terminate the log script by pressing Plot the counts of chromatin-associated factors at distinct categories of functional regions.22.The file \u2018Factor_peaks_at_functional_regions.txt\u2019 reports the number of ChIP-seq peaks at each functional category. ined see E and 4C.functional_genomic_regions.bed file, each block represents the coordinates of an individual functional genomic region. The block color codes for the category of the region . In UBET2 and PPP1R12B genes are shown, and two intergenic enhancer candidates identified on the longer isoform of PPP1R12B. Furthermore, the protocol counts engaged Pol II complexes that localize to each identified functional genomic region. The raw count of active sites of transcription is given in column 4 of the functional_genomic_regions.bed file, appearing above the color-coded block when visualized in a genome browser (HBE1 gene to the region\u2019s upstream locus control element (LCR). In PRO-seq used here :https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"/bin/bash -c \"$. Likewise, the names of the files in the scripts should be updated to match the user-specific input files.Please find the code as shell and R script files in the GitHub repository viher@kth.se).Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Anniina Vihervaara (This study did not generate new unique reagents."} +{"text": "Escherichia coli (EHEC) isolates of serotype O157:H7 are serious foodborne zoonotic pathogens and prime targets for biocontrol using bacteriophages. 
We report on the complete genome sequences of 11 novel lytic bacteriophages, representing three viral genera, isolated from cattle in Hungary that target E. coli O157 strains. Enterohemorrhagic Escherichia coli (EHEC) strains, particularly of the O157:H7 serotype, are serious foodborne zoonotic pathogens, and several phages which show effective lysis of these have previously been characterized. The phages reported here were screened against typical EHEC O157:H7 strains and atypical pathotypes isolated earlier in Hungary, as well. Sequencing reads were quality-controlled and trimmed using tools including FastQ Screen (https://www.bioinformatics.babraham.ac.uk/projects/fastq_screen) and Trimmomatic.
Phages vb_EcoM_bov9_1 and vb_EcoM_bov10K1 proved to be T4-like phages, with genome sizes of 166,440 and 166,441 bp, respectively, and a GC content of 35.4%. Phages vb_EcoM_bov10K2, vb_EcoM_bov11CS3, vb_EcoM_bov22_2, and vb_EcoM_bov25_3 are rV5-like phages belonging to the Vequintavirus genus of the Vequintavirinae subfamily, having a >96% nucleotide identity to the type phage rV5 (DQ832317.1). Their genome sizes were between 135,960 and 135,961 bp, with a 43.7% GC content. Phages vb_EcoS_bov11C2, vb_EcoS_bov15_1, vb_EcoS_bov16_1, vb_EcoS_bov22_1, and vb_EcoS_bov25_1D represent HK578-like phages, officially the Dhillonvirus genus from the Siphoviridae family, with 86% genome coverage and >90% nucleotide identity to phage HK578 (JQ086375.1). Their genome sizes were between 44,612 and 44,747 bp, and their GC content was 54.5%.
Phages within the same genus were very uniform. The T4-like phages differed in only a 1-bp gap. Of the rV5-like phages, vb_EcoM_bov10K2, vb_EcoM_bov11CS3, and vb_EcoM_bov25_3 were 100% identical except for a 1-bp gap, while vb_EcoM_bov22_2 differed from them in only 2 single nucleotide polymorphisms (SNPs). The dhillonviruses differed at most by 11 SNPs, with a gap of 30 or 89 bp. The genome of vb_EcoS_bov22_1 was assembled as a partial genome sequence, with a different start position compared to the other dhillonviruses.
The nucleotide sequences of the phages are deposited in GenBank under the accession numbers MT884006 through MT884015 and MT951623."}
{"text": "Selection and sorting the Cartesian sum, X + Y, are classic and important problems. Here, a new algorithm is presented, which generates the top k values of the form Xi + Yj. Note that this problem definition is presented w.l.o.g.; X and Y need not share the same length. Top-k is important to practical applications, such as selecting the most abundant k isotopologue peaks from a compound. Given two vectors of length n, any method solving top-k is ∈ Ω(n + k), because loading the vectors is ∈ Θ(n) and returning the minimal k values is ∈ Θ(k).
Top-k can be solved trivially in O(n²·log(n)) steps by generating and sorting all n² values of the form Xi + Yj. By using median-of-medians (a worst-case linear time one-dimensional selection algorithm), this can be improved to O(n²) steps by generating all n² values and performing k-selection on them.
In 1982, Frederickson & Johnson introduced a method reminiscent of median-of-medians; their method finds the kth minimum value from X + Y in subquadratic time. Frederickson subsequently published a second algorithm, which finds the k smallest elements from a min-heap in O(k), assuming the heap has already been built. This, too, can be used to solve top-k in O(n + k).
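For reference, the trivial quadratic baseline described above can be written in a few lines of python. This sketch is illustrative only; it uses heapq.nsmallest as a convenient stand-in for the one-dimensional k-selection step:
import heapq
import random

def naive_top_k(X, Y, k):
    # Generates all len(X)*len(Y) Cartesian sums and k-selects on them;
    # Theta(n^2) space, which the LOH-based method below avoids.
    return heapq.nsmallest(k, (x + y for x in X for y in Y))

X = [random.random() for _ in range(200)]
Y = [random.random() for _ in range(200)]
print(naive_top_k(X, Y, 5))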
The tree data structure in Frederickson's method can be combined with a combinatoric heap to compute the kth smallest value from X + Y.
Kaplan et al. described an alternative method for selecting the kth smallest value from X + Y; that method uses a soft heap. As a tuple is popped from the soft heap, lower-quality tuples are inserted into the soft heap. These lower-quality tuples are of the form (i, j + 1) and, when j = 1, (i + 1, 1). By heaping tuples of indices into the matrix Xi + Yj in this manner, the scheme progresses in row-major order, thereby avoiding a tuple being added multiple times.
To compute the kth smallest value from X + Y, the best k values are popped from the soft heap. Even though only the minimal k values are desired, "corruption" in the soft heap means that the soft heap will not always pop the minimal value; however, as a result, soft heaps can run faster than the per-operation bounds on comparison-based heaps. The number of corrupted elements in the soft heap is at most t · ε, where t is the number of insertions into the soft heap thus far. Thus, instead of popping k items, O(k) values, which must include the minimal k values, are popped. These values are post-processed to retrieve the minimal k values via linear time one-dimensional selection. For constant ε, this computes the kth smallest value from X + Y in O(n + k) steps. Note that the Kaplan et al. method easily solves top-k in O(n + k) steps; this is because computing the kth smallest value from X + Y pops the minimal k values from the soft heap.
This paper uses layer-ordered heaps (LOHs) to produce a new, optimal top-k algorithm on X + Y. LOHs are stricter than heaps but not as strict as sorting: Heaps guarantee only that Xi ≤ Xchildren(i), but do not guarantee any ordering between one child of Xi, a, and the child of the sibling of a. Sorting is stricter still, but sorting n values cannot be done faster than Ω(n·log(n)). A LOH partitions values into layers X(1), X(2), ..., where every value in layer u is no greater than every value in layer u + 1 (max(X(u)) ≤ min(X(u+1))). The size of these layers starts with |X(1)| = 1 and grows exponentially such that |X(u+1)| ≈ α·|X(u)|. Because every value in layer u precedes the values in layer u + 1, this can be seen as a more constrained form of the heap; however, unlike sorting, for any constant α > 1, LOHs can be constructed ∈ O(n) by performing iterative linear time one-dimensional selection, iteratively selecting and removing the largest layer until all layers have been partitioned. For example, 8,1,6,4,5,3,2 can be LOHified with α = 2 into an LOH with three layers by first selecting the largest 4 values on the entire list (8, 6, 5, 4), removing them, and then selecting the largest 2 values from the remaining 3 values (3, 2), leaving (1) as the first layer. LOHs have previously been used to select the minimal k values from X1 + X2 + ⋯ + Xm (where each Xi has length n). Note that LOHification with α = 1 is not trivial, because then |X(1)| = |X(2)| = ⋯ = 1, indicating a sorting, which implies a runtime ∈ Ω(n·log(n)). A python implementation of a LOH is shown in listing 1.
The new, optimal algorithm for solving top-k presented here makes extensive use of LOHs. It is simple to implement and does not rely on anything more complicated than linear time one-dimensional selection. Due to its simplicity and contiguous memory access, it has a fast performance in practice.
The algorithm presented is broken into phases; an illustration of these phases is provided in the accompanying figure. In phase 0, X and Y are LOHified. This is performed by using linear time one-dimensional selection to iteratively remove the largest remaining layer, so that both X(u) and Y(v) are layers of their respective LOHs. Phase 1 then considers layer products of the form X(u) + Y(v); a binary min-heap H is initialized with the tuple corresponding to the minimal layer product X(1) + Y(1). The algorithm proceeds by popping the lexicographically minimum tuple from H.
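As a concrete check of phase 0, the LOH construction of listing 1 can be applied to the example list above. This usage sketch assumes listing 1 is saved as LayerOrderedHeap.py and constructed as LayerOrderedHeap(list); the order of values within a layer is arbitrary:
from LayerOrderedHeap import LayerOrderedHeap

loh = LayerOrderedHeap([8, 1, 6, 4, 5, 3, 2])
print(len(loh))                  # 3 layers for n = 7 when alpha = 2 (sizes 1, 2, 4)
print(loh)                       # e.g., [[1], [3, 2], [8, 6, 4, 5]]
print(loh.max(0) <= loh.min(1))  # layer-ordering property holds: True
print(loh.max(1) <= loh.min(2))  # True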
W.l.o.g., there is no guaranteed ordering of the form Xu)( + Yv)( \u2264 Xu(+ 1) + Yv)( + Yv) > min(Xu(+ 1) + Yv); however, lexicographically, H only after \u230a\u230b has been popped from H. Note that for this reason and to break ties where layer products contain identical values, are included in the tuple. \u2308\u2309 tuples do not insert any new tuples into H when they\u2019re popped.Binary heap u,v)\u2309 is popped from H, the index is appended to list q and the size of the layer product |Xu)( + Yv)(| = |Xu)(|\u00b7|Yv) appended to list q. s\u2032 is the total number of elements in each of these layer products appended to q during phase 2.Any remaining tuple in q are generated. A linear time one-dimensional k-selection is performed on these values and returned.The values from every element in each layer product in q must contain the minimal k values in X + Y. Thus, by performing one-dimensional k-selection on those values in phase 3, the minimal k values in X + Y are found.Lemma 2.4 proves that at termination all layer products found in Lemma 2.1.If > 1) and 1) must previously have been popped from H.Proof. There is a chain of pops and insertions backwards from u, v = 1, the lemma is true.When both u = 1 this chain is of the form W.l.o.g. if H increment either row or column, something of the form H before inserting a = u, then a < u, then from the insertion of H, until H must contain something of the form H. Because there are a finite number of these a' and they are not revisited, before Otherwise, both Lemma 2.2If .Proof. Inserting 1) and 1). These pops will insert will therefore be popped before Lemma 2.3All tuples will be visited in ascending order as they are popped from H.Proof. Let H and let a < u, b \u2264 v, or w.l.o.g. a < u, b > v. In the former case, H until In the latter case, lemma 2.1 says that H, H, then H. Each Ordering on popping with H until Identical reasoning also shows that \u25a1Thus, all tuples are popped in ascending order. Lemma 2.4At the end of phase 2, the layer products whose indices are found in q contain the minimal k values.Proof. Let be the layer product that first makes s\u2265 k. There are at least k values of X + Y that are q at the end of phase 1 can only be improved by trading some value for a smaller value, and thus require a new value 1 and v\u2032>1. By lemma 2.3, minimum and maximum layer products are popped in ascending order. By the layer ordering property of X and Y, \u25a1Lemma 2.6s, the number of elements in all layer products appended to q in phase 1, is \u2208 O(k).Proof. is the layer product whose inclusion during phase 1 in q achieves s \u2265 k; therefore, . This happens when H.If k = 1, popping .k > 1, then at least one layer index is >1: u > 1 or v > 1. W.l.o.g., let u > 1. By lemma 2.1, popping H requires previously popping Xu)( + Yv)(|\u2208 O(|Xu(\u22121) + Yv)(|). |Xu(\u22121) + Yv)( + Yv)( + Yv)(|\u2208 O(k). s \u2208 O(k). \u25a1If Lemma 2.7s\u2032, the total number of elements in all layer products appended to q in phase 2, \u2208 O(k).Proof. Each layer product appended to q in phase 2 has had u\u2032 = 1 or v\u2032 = 1 or u\u2032 > 1 and v\u2032 > 1. Each matches exactly one layer product . Because s, the count of all elements whose layer products were inserted into q in phase 1, includes Xu\u2032)( + Yv\u2032)( (the latter is appended to q during phase 2). 
By exponential growth of layers in X and Y, Xu\u2032(\u22121) + Yv\u2032(\u22121)| values were included in s during phase 1, and thus the total number of elements in all such layer products is \u2264 s. Thus the sum of sizes of all layer products with u\u2032 > 1 and v\u2032 > 1 that are appended to q during phase 2 is asymptotically \u2264 \u03b12\u00b7 s.First consider when u\u2032 = 1 or v\u2032 = 1, the number of elements in all layer products must be \u2208 O(n): u\u2032 = 1 or v\u2032 = 1 are \u2208 O(k):When either u\u2032>1, H only when H at any time. Furthermore, popping H requires previously popping H: layer ordering on X implies max(Xu\u2032(\u22121)) \u2264 min (Xu\u2032) and |Y(1)| = 1 implies min(Y(1)) = max(Y(1)), and so H and counted in s. By the exponential growth of layers, the contribution of all such u\u2032 > 1, v\u2032 = 1 will be u\u2032 > 1, v\u2032 = 1 or u\u2032 = 1, v\u2032 > 1 will be \u2248 \u2264 2 \u03b1 \u00b7 s.W.l.o.g. for u\u2032 = v\u2032 = 1, the layer product contains 1 element.When s\u2032, the total number of elements found in layer products appended to q during phase 2, has s\u2032\u2208 O(k). \u25a1Therefore, Theorem 2.8The total runtime of the algorithm is \u2208 O(n + k).Proof. For any constant \u03b1 > 1, LOHification of X and Y runs in linear time, and so phase 0 runs \u2208 O(n).\u03b1(n); therefore, the total number of layer products is 2\u03b1(n) elements, because each layer product may be inserted as both orO(n) runtime of phase 0.The total number of layers in each LOH is \u2248 logs\u2208 O(k). Likewise, lemma 2.7 shows that s\u2032\u2208 O(k). The number of elements in all layer products in q during phase 3 is s + s\u2032\u2208 O(k). Thus, the number of elements on which the one-dimensional selection is performed will be k-selection in phase 3 is \u2208 O(k).Lemma 2.6 shows that O(n + k + k + k) = O(n + k). \u25a1The total runtime of all phases \u2208 O(n2log(n) + k) method (chosen for reference because it is the easiest method to implement and because of the fast runtime constant on python\u2019s built-in sorting routine), the soft heap-based method from Kaplan et al., and the LOH-based method in this paper are shown in Runtimes of the naive kth best value Xi + Yj occurs. It is somewhat reminiscent of skip lists ). But unlike soft heaps, LOHs can be constructed easily using only an implementation of median-of-medians .The algorithm can be thought of as \u201czooming out\u201d as it pans through the layer products, thereby passing the value threshold at which the ip lists ; howeverk appears in the runtime formula. This is significant because the layer products in q at the end of phase 2 could be returned in their compressed form . The total runtime of phases 0\u20132 is \u2208 O(n). It may be possible to recursively perform X + Y selection on layer products Xu)( + Yv).Phase 3 is the only part of the algorithm in which X and Y are extended on the fly or where several subsequent selections are performed), whereas listing 2 could be adapted to those uses.As noted in theorem 2.8, even fully sorting all of the minimum and maximum layer products would be Phase 0 (which performs LOHification) is the slowest part of the presented python implementation; it would benefit from having a practically faster implementation to perform LOHify.The fast practical performance is partially due to the algorithm\u2019s simplicity and partially due to the contiguous nature of LOHs. 
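A brute-force comparison is a convenient way to sanity-check an implementation before reproducing such runtime comparisons. The sketch below assumes listings 1 and 2 are saved as LayerOrderedHeap.py and CartesianSumSelection.py, and that select(k) returns the minimal k sums in arbitrary order:
import random
from CartesianSumSelection import CartesianSumSelection  # listing 2

random.seed(0)
X = [random.randrange(10**6) for _ in range(300)]
Y = [random.randrange(10**6) for _ in range(300)]
k = 1000

got = sorted(CartesianSumSelection(X, Y).select(k))
expected = sorted(x + y for x in X for y in Y)[:k]  # brute force over all sums
assert got == expected, "LOH-based selection disagrees with brute force"
print("ok; smallest five sums:", got[:5])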
Online data structures like soft heap are less easily suited to contiguous access, because they support efficient removal and therefore move pointers to memory rather than moving the contents of the memory.\u03b1 affects performance through the cost of LOHifying and the amount by which the number of generated values overshoots the k minimum values wanted: when \u03b1 \u2248 1, LOHify effectively sorts X and Y, but generates few extra values; k-selection.The choice of k values from X + Y. The new optimal algorithm presented here is faster in practice than the existing soft heap-based optimal algorithm.LOHs can be constructed in linear time and used to produce a theoretically optimal algorithm for selecting the minimal Listing 1.LayerOrderedHeap.py: A class for LOHifying, retrieving layers, and the minimum and maximum value in a layer.# https://stackoverflow.com/questions/10806303/python-implementation-of-median-of-medians-algorithmdef median_of_medians_select: # returns j-th smallest value:if len(L) < 10:L.sortreturn L[j]S = lIndex = 0while lIndex+5 < len(L)-1:S.append(L[lIndex:lIndex+5])lIndex += 5S.append(L[lIndex:])Meds = for subList in S:Meds.append-1)/2)))med = median_of_medians_select-1)/2))L1 = L2 = L3 = for i in L:if i < med:L1.append(i)elif i > med:L3.append(i)else:L2.append(i)if j < len(L1):return median_of_medians_selectelif j < len(L2) + len(L1):return L2[0]else:return median_of_medians_select-len(L2))def partition:n = len(array)right_n = n - left_n# median_of_medians_select argument is index, not size:max_value_in_left = median_of_medians_selectleft = right = for i in range(n):if array[i] < max_value_in_left:left.append(array[i])elif array[i] > max_value_in_left:right.append(array[i])num_at_threshold_in_left = left_n - len(left)left.extendnum_at_threshold_in_right = right_n - len(right)right.extendreturn left, rightdef layer_order_heapify_alpha_eq_2(array):n = len(array)if n == 0:return if n == 1:return arraynew_layer_size = 1layer_sizes = remaining_n = nwhile remaining_n > 0:if remaining_n >= new_layer_size:layer_sizes.append(new_layer_size)else:layer_sizes.append(remaining_n)remaining_n -= new_layer_sizenew_layer_size *= 2result = for i,ls in enumerate(layer_sizes[::-1]):small_vals,large_vals = partition - ls)array = small_valsresult.appendreturn result[::-1]class LayerOrderedHeap:def __init__:self._layers = layer_order_heapify_alpha_eq_2(array)self._min_in_layers = [ min(layer) for layer in self._layers ]self._max_in_layers = [ max(layer) for layer in self._layers ]#self._verifydef __len__(self):return len(self._layers)def _verify(self):for i in range(len(self)-1):assert(self.max(i) <= self.min(i+1))def __getitem__:return self._layers[layer_num]def min:assert( layer_num < len(self) )return self._min_in_layers[layer_num]def max:assert( layer_num < len(self) )return self._max_in_layers[layer_num]def __str__(self):return str(self._layers)Listing 2.CartesianSumSelection.py: A class for efficiently performing selection on X + Y in \u0398(n + k) steps.from LayerOrderedHeap import *import heapqclass CartesianSumSelection:def _min_tuple:# True for min corner, False for max cornerreturn (self._loh_a.min(i) + self._loh_b.min(j), , False)def _max_tuple:# True for min corner, False for max cornerreturn (self._loh_a.max(i) + self._loh_b.max(j), , True)def _in_bounds:return i < len(self._loh_a) and j < len(self._loh_b)def _insert_min_if_in_bounds:if not self._in_bounds:returnif not in self._hull_set:heapq.heappush)self._hull_set.add )def _insert_max_if_in_bounds:if not 
self._in_bounds:returnif not in self._hull_set:heapq.heappush)self._hull_set.add )def __init__:self._loh_a = LayerOrderedHeap(array_a)self._loh_b = LayerOrderedHeap(array_b)self._hull_heap = [ self._min_tuple ]# False for min:self._hull_set = { }self._num_elements_popped = 0self._layer_products_considered = self._full_cartesian_product_size = len(array_a) * len(array_b)def _pop_next_layer_product(self):result = heapq.heappop(self._hull_heap)val, , is_max = resultself._hull_set.remove )if not is_max:# when min corner is popped, push their own max and neighboring minsself._insert_min_if_in_boundsself._insert_min_if_in_boundsself._insert_max_if_in_boundselse:# when max corner is popped, do not pushself._num_elements_popped += len(self._loh_a[i]) * len(self._loh_b[j])self._layer_products_considered.append )return resultdef select:assert( k <= self._full_cartesian_product_size )while self._num_elements_popped < k:self._pop_next_layer_product# also consider all layer products still in hullfor val, , is_max in self._hull_heap:if is_max:self._num_elements_popped += len(self._loh_a[i]) * len(self._loh_b[j])self._layer_products_considered.append )# generate: values in layer products# Note: this is not always necessary, and could lead to a potentially large speedup.candidates = for val_b in self._loh_b[j] ]print / k) )k_small_vals, large_vals = partitionreturn k_small_valsListing 3.SimplifiedCartesianSumSelection.py: A simplified implementation of Listing 2. This implementation is slower when k \u226a n2; however, it has the same asymptotic runtime for any n and k: \u0398(n + k).from LayerOrderedHeap import *class SimplifiedCartesianSumSelection:def _min_tuple:# True for min corner, False for max cornerreturn (self._loh_a.min(i) + self._loh_b.min(j), , False)def _max_tuple:# True for min corner, False for max cornerreturn (self._loh_a.max(i) + self._loh_b.max(j), , True)def __init__:self._loh_a = LayerOrderedHeap(array_a)self._loh_b = LayerOrderedHeap(array_b)self._full_cartesian_product_size = len(array_a) * len(array_b)self._sorted_corners = sorted for i in range(len(self._loh_a)) for j in range(len(self._loh_b))] + [self._max_tuple for i in range(len(self._loh_a)) for j in range(len(self._loh_b))])def select:assert( k <= self._full_cartesian_product_size )candidates = index_in_sorted = 0num_elements_with_max_corner_popped = 0while num_elements_with_max_corner_popped < k:val, , is_max = self._sorted_corners[index_in_sorted]new_candidates = [ v_a+v_b for v_a in self._loh_a[i] for v_b in self._loh_b[j] ]if is_max:num_elements_with_max_corner_popped += len(new_candidates)else:# Min corners will be popped before corresponding max corner;# this gets a superset of what is needed (just as in phase 2)candidates.extend(new_candidates)index_in_sorted += 1print / k) )k_small_vals, large_vals = partitionreturn k_small_vals"} +{"text": "With the development of distributed generation and the corresponding importance of the P.V. (photovoltaic) system, it is desired to operate a P.V. system efficiently and reliably. To ensure such an operation, a monitoring system is required to diagnose the health of the system. This paper aims to analyze a P.V. system under various operating conditions to identify parameters\u2013derived from the I-V (current-voltage) characteristics of the P.V. system\u2013that could serve as electrical signatures to various faulty operations and facilitate in devising a monitoring algorithm for the system. A model-based approach has been adopted to represent a P.V. 
system, using a one-diode model of a practical P.V. cell, developed in MATLAB/Simulink. The modelled system comprises two arrays, while each array has two panels in series. It was simulated for various operating conditions: healthy condition represented by STC (Standard Testing Condition), O.C. (open-circuited), soiling, P.S. (partial shading), H.S. (panels hotspots) and P.D. (panels degradation) conditions. For the analysis of I-V curves under these conditions, six derived parameters were selected: Vte, MCPF (maximum current point factor), Ri (currents ratio), S (slope), and Dv and Di. Using these parameters, data of the actual system under various conditions were compared with its model-generated data for healthy operating conditions. Thresholds were set for each parameter's value to mark the normal operation range. It was observed that almost every considered fault creates a unique combination of sensitive parameters whose values exceed the pre-defined thresholds, creating an electrical signature that appears only when the corresponding conditions on the system are achieved. Based on these signatures, an algorithm has been proposed in this study which aims to identify and classify the considered faults. In comparison to other such studies, this work focuses on those sensitive parameters for fault identification which show greater sensitivity and contribute more to the creation of unique sets of sensitive parameters for the considered faults.
Energy, being the necessity for economic development, has played an essential role in making up current-day civilization. The living standards of a country can be judged from the energy consumption per person living there. Nevertheless, like any other process, a P.V. system operation is vulnerable to certain limitations and faults.
The methodology opted in this work, to monitor a P.V. system against various contingent situations, belongs to the electrical methods. It relies on the analysis of the I-V characteristics of the P.V. plant, using a model-based approach. By comparing these characteristics of an actual system with those of its modelled system, with the help of parameters derived from them, the system's health can be inferred. Such an approach has been opted in this work using the one-diode model with improved parameters for modelling a P.V. system, as it is uncomplicated and accurate enough to represent a practical P.V. plant operation. In this model, shunt (Rsh) and series (Rs) resistances account for the losses in a practical P.V. cell operation. The equation that describes the one-diode model is given by Eq (1),
I = Ipv − Io·[exp((V + Rs·I)/(Vt·A)) − 1] − (V + Rs·I)/Rsh    (1)
where Ipv is the current generated by the incident light, Io is the reverse saturation current of the diode, and Vt = KB·T/q is the thermal voltage of the cell, in which q (1.60218×10−19 C) is the charge on an electron, KB (1.38065×10−23 J/K) is the Boltzmann constant, and A is the diode ideality factor of a P.V. cell. For several cells connected in series (Ns), to make a panel/module, the equation becomes Eq (2), with the thermal voltage
Vt = Ns·KB·T/q    (2)
whereas Io is given by Eq (3),
Io = (Iscn + KI·ΔT) / (exp((Vocn + KV·ΔT)/(A·Vt)) − 1)    (3)
written in terms of the nominal short-circuit current (Iscn) and open-circuit voltage (Vocn) at STC, where ΔT = T − Tn, and Tn (in Kelvin) represents the temperature of the panel at STC. These equations can be solved using data from the P.V. panels manufacturer datasheet along with irradiance and temperature values for any operating condition. Literature study reveals that a pyranometer at the panel's surface can estimate the irradiance very well, while the temperature of the panels can be estimated using an empirical formula from the literature.
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1:\u00a0PartlyReviewer #2:\u00a0Yes********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1:\u00a0I Don't KnowReviewer #2:\u00a0Yes********** 3. Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. The Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1:\u00a01. There are places in the article that do not indicate which equation to cite, such as the paragraph above Equation 7.2. What does P.S. (3.1) mean in the descriptive paragraph of Figure 6?3. In the analysis of Figure 6, it is mentioned that four different regions can be set for values beyond the predefined thresholds, but it is not clearly described what the four regions are and how to define the boundary between the four regions.4. Appropriately add some relevant descriptions on how to determine the threshold range.5. Add some comparisons with other scholars' work on electrical methods.6. Further elaborate on the specific advantages or innovations of this method.Reviewer #2:\u00a01. The introduction of this submission should be improved further. The key differences between your work and previous studies should be clarified. A point-to-point way is recommended to state the main contributions.2. PV modeling section is very traditional, and this reviewer suggests this section can be shorten properly.3. The writing style of this work likes a technical report, but not a scientific paper.4. A comparison of your method and other work should be carried out.5. Most of the figures have poor quality, such as, lacking of unit.********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose \u201cno\u201d, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? 
For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No. Reviewer #2: No. While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 26 Sep 2021. Editor's comments, Comments to the Author: Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. We look forward to receiving your revised manuscript. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf Author's response: The corresponding author, on behalf of all the co-authors, would like to thank the editor for the positive feedback and encouragement. The revised version of the manuscript has been updated according to the guidelines provided by the PLOS ONE style requirements. ________________________________________ 2. Thank you for stating the following financial disclosure: Author's response: The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. ________________________________________ a) Please clarify the sources of funding for your study. List the grants or organizations that supported your study, including funding received from your institution. Author's response: No funding or financial grant has been received for conducting this study. ________________________________________ b) State what role the funders took in the study. If the funders had no role in your study, please state: Author's response: The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. ________________________________________ c) If any authors received a salary from any of your funders, please state which authors and which funders. Author's response: The authors received no specific funding for this work. ________________________________________ 3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. In your revised cover letter, please address the following prompts: a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail and who has imposed them.
Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. Author's response: There are no ethical restrictions on sharing a de-identified data set. ________________________________________ b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Author's response: The data set is uploaded in the review as a Supporting Information file. ________________________________________ 4. Please amend your list of authors on the manuscript to ensure that each author is linked to an affiliation. Author's response: The list of authors is provided with affiliations in the given order, which is requested to be considered in the revised version of the manuscript. i. Muhammad Adnan Khan. Affiliation: Research Associate, Center for Advanced Studies in Energy, University of Engineering and Technology, Phase 5, opposite the Sui Northern Gas Pipeline office, postal address 25000, Peshawar. ii. Khalid Khan*. Affiliation: Research Associate, Center for Advanced Studies in Energy, University of Engineering and Technology, Phase 5, opposite the Sui Northern Gas Pipeline office, postal address 25000, Peshawar. iii. Adnan Daud Khan*. Affiliation: Dean, Faculty of Renewable Energy, Center for Advanced Studies in Energy, University of Engineering and Technology, Phase 5, opposite the Sui Northern Gas Pipeline office, postal address 25000, Peshawar. iv. Zubair Ahmad Khan. Affiliation: Professor, Department of Mechatronics, University of Engineering and Technology, Phase 5, opposite the Sui Northern Gas Pipeline office, postal address 25000, Peshawar. v. Shahbaz Khan. Affiliation: Lab Engineer, Department of Mechatronics, University of Engineering and Technology, Phase 5, opposite the Sui Northern Gas Pipeline office, postal address 25000, Peshawar. vi. Muhammad Rizwan Siddiqui. Affiliation: Lecturer, Capital University of Science and Technology (CUST), Islamabad Expressway, Kahuta Road, Zone-V Sihala, Islamabad, Islamabad Capital Territory. ________________________________________ 5. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. Author's response: The authors agree to publish the review history of the article. ________________________________________ Please carefully address the comments of the two reviewers to improve your paper. Editor's comments, Concern #1: Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly. Author response: The authors appreciate the remarks of reviewer #1 and would gladly improve any particular section of the manuscript, if highlighted by the reviewer.
Reviewer #2: Yes. Author response: The authors appreciate the positive response of reviewer #2. ________________________________________ Concern #2: Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: I Don't Know. Author response: The authors would be happy to elaborate further on the techniques used in the article, provided specific questions are raised. Reviewer #2: Yes. Author response: The authors appreciate the feedback of reviewer #2. ________________________________________ Concern #3: Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g., participant privacy or use of data from a third party, those must be specified. Reviewer #1: Yes. Author response: The authors appreciate the positive feedback of reviewer #1. Reviewer #2: Yes. Author response: The authors appreciate the positive feedback of reviewer #2. ________________________________________ Concern #4: Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes. Author response: The authors appreciate the positive feedback of reviewer #1. Reviewer #2: Yes. Author response: The authors appreciate the positive response of reviewer #2. ________________________________________ 5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1: 1. There are places in the article that do not indicate which equation to cite, such as the paragraph above Equation 7. Author's response: The authors appreciate reviewer #1's comment on this issue, which was an oversight in the earlier draft. This mistake has been rectified, and all equations are now numbered and cited properly in the new draft. The corrected equations are highlighted in yellow in the new draft. ________________________________________ 2. What does P.S. (3.1) mean in the descriptive paragraph of Figure 6? Author's response: This issue has been addressed in the new draft in the last paragraph of the methodology section, highlighted in yellow there. We hope this answers the reviewer's question. ________________________________________ 3. In the analysis of Figure 6, it is mentioned that four different regions can be set for values beyond the predefined thresholds, but it is not clearly described what the four regions are and how to define the boundary between the four regions. Author's response: To answer this question, a brief description has been added to the new draft just below Fig.
(6), in highlighted text. This description briefly elaborates on the threshold levels and how they were selected. Furthermore, all four regions regarding "Vte" are described and the boundaries between them are clearly defined there. However, the focus of this paper is on checking the sensitivity of parameters that could serve in creating electrical signatures. Hence, it was found that Vte serves this purpose by responding to the various operating conditions in ways distinct enough to differentiate between faults, as the description makes clear. ________________________________________ 4. Appropriately add some relevant descriptions on how to determine the threshold range. Author's response: This question has already been answered in question 3. Apart from that, the threshold levels are defined more clearly in the first paragraph of the Methodology section (text highlighted in yellow). We hope this answers the question. ________________________________________ 5. Add some comparisons with other scholars' work on electrical methods. Author's response: A comparison with other scholars' work has been made at the end of the Results and Discussion section, highlighted in yellow. ________________________________________ 6. Further elaborate on the specific advantages or innovations of this method. Author's response: This has been answered in the same description where the comparison with other scholars' work was made, as stated above. ________________________________________ Reviewer #2: 1. The introduction of this submission should be improved further. The key differences between your work and previous studies should be clarified. A point-to-point way is recommended to state the main contributions. Author's response: The paper has been reviewed by the authors following the reviewers' comments and the content has been improved. We hope this satisfies the reviewer. ________________________________________ 2. The PV modelling section is very traditional, and this reviewer suggests that it can be shortened appropriately. Author's response: The authors appreciate the reviewer's comment on PV modelling. In response, it is stated that the PV modelling section contains only the equations needed for the model and the related text describing the terms used in them. No extra content has been added, and the authors have aimed to keep the modelling section as brief as possible. ________________________________________ 3. The writing style of this work reads like a technical report, not a scientific paper. Author's response: The paper has been revised with minor changes as per the reviewers' comments. We hope the new draft reads more like a scientific paper. ________________________________________ 4. A comparison of your method and other work should be carried out. Author's response: Done at the end of the Results and Discussion section. ________________________________________ 5. Most of the figures are of poor quality, e.g., lacking units. Author's response: This issue has been addressed in the new draft, where care has been taken while generating new figures for the paper. Attachment. Submitted filename: Response to Reviewers.docx. 17 Nov 2021. A model-based approach for detecting and identifying faults on the D.C. side of a P.V. system using electrical signatures from I-V characteristics. PONE-D-21-20995R1. Dear Dr.
Khan, We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Lei Chen, Ph.D., Academic Editor, PLOS ONE. Additional Editor Comments: Reviewers' comments: Reviewer's Responses to Questions. Comments to the Author. 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed. Reviewer #2: All comments have been addressed. ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes. Reviewer #2: Yes. ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes. Reviewer #2: Yes. ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified. Reviewer #1: Yes. Reviewer #2: Yes. ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes. Reviewer #2: Yes. ********** 6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1: (No Response). Reviewer #2: (No Response). ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No. Reviewer #2: No. 23 Dec 2021. PONE-D-21-20995R1. A model-based approach for detecting and identifying faults on the D.C. side of a P.V. system using electrical signatures from I-V characteristics. Dear Dr. Khan: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff, on behalf of Professor Lei Chen, Academic Editor, PLOS ONE"} +{"text": "HarryOW and Peeb are Mycobacterium smegmatis mc2 155 Siphoviridae temperate phages with 52,935 and 41,876 base pairs in genome length, respectively. HarryOW belongs to the A1 subcluster and Peeb to the G1 subcluster. They were isolated and annotated by students from the SUNY Old Westbury Science and Technology Entry Program. The phages were isolated by enrichment with Mycobacterium smegmatis mc2 155 at 37°C, followed by two cycles of purification and amplification in 7H9 top agar at 37°C. DNA was extracted and sequenced, and the genomes were annotated with DNA Master (http://cobamide2.bio.pitt.edu/computer.htm) and the gene prediction tool GLIMMER v3.0. Peeb is a G1 cluster mycobacteriophage with 41,876 bp, a GC content of 66.6%, 62 genes, and an 11-bp 3' sticky overhang terminus. A total of 23 of the 62 genes are of known function. The integration cassette genes of Peeb are SEA_PEEB_32, a tyrosine integrase; SEA_PEEB_33, an immunity repressor; and SEA_PEEB_34, an excise. SEA_PEEB_32 and SEA_PEEB_33 are the only two genes in the Peeb genome coded on the reverse strand. SEA_PEEB_54 is a mycobacteriophage mobile element 1 (MPME1) found in other members of this gene pham and of this cluster.
Peeb is 99% genetically identical to Schiebel, with which it shares 94.32% gene content and 58 gene phams. HarryOW is an A1 cluster mycobacteriophage with 52,935 bp, a GC content of 63.8%, 94 genes, and an 11-bp 3' sticky overhang terminus. A total of 35 of the 94 genes are of known function. Forty genes are on the forward strand, mostly on the left arm of the genome, except for genes SEA_HARRYOW_90, SEA_HARRYOW_91, and SEA_HARRYOW_93. HarryOW is 98% genetically identical to Rutherferd (GenBank accession no. NC_052461), sharing 80.02% gene content similarity and 74 phams. As in many of the A cluster phages, the immunity repressor SEA_HARRYOW_76 is not located adjacent to the serine integrase SEA_HARRYOW_37, and the DNA primase has two overlapping reading frames, namely, SEA_HARRYOW_54 and SEA_HARRYOW_55. One of the three minor tail protein genes, SEA_HARRYOW_6, is located in the left arm of the genome, upstream of the lysin A SEA_HARRYOW_10, lysin B SEA_HARRYOW_11, and the terminase SEA_HARRYOW_12. The other two minor tail protein genes, namely, SEA_HARRYOW_26 and SEA_HARRYOW_28, are located after the tape measure gene SEA_HARRYOW_25. The genome sequences for Peeb and HarryOW have been deposited in GenBank and the Sequence Read Archive."} +{"text": "The domestic sheep (Ovis aries) is an important agricultural species raised for meat, wool, and milk across the world. A high-quality reference genome for this species enhances the ability to discover genetic mechanisms influencing biological traits. Furthermore, a high-quality reference genome allows for precise functional annotation of gene regulatory elements. The rapid advances in genome assembly algorithms and emergence of sequencing technologies with increasingly long reads provide the opportunity for an improved de novo assembly of the sheep reference genome. Short-read Illumina (55× coverage), long-read Pacific Biosciences (75× coverage), and Hi-C data from this ewe retrieved from public databases were combined with an additional 50× coverage of Oxford Nanopore data and assembled with canu v1.9. The assembled contigs were scaffolded using Hi-C data with Salsa v2.2, gaps filled with PBsuite v15.8.24, and polished with Nanopolish v0.12.5. After duplicate contig removal with PurgeDups v1.0.1, chromosomes were oriented and polished with 2 rounds of a pipeline that consisted of freebayes v1.3.1 to call variants, Merfin to validate them, and BCFtools to generate the consensus fasta. The ARS-UI_Ramb_v2.0 assembly is 2.63 Gb in length and has improved continuity (contig NG50 of 43.18 Mb), with a 19- and 38-fold decrease in the number of scaffolds compared with Oar_rambouillet_v1.0 and Oar_v4.0. ARS-UI_Ramb_v2.0 has greater per-base accuracy and fewer insertions and deletions identified from mapped RNA sequence than previous assemblies. The ARS-UI_Ramb_v2.0 assembly is a substantial improvement in contiguity that will optimize the functional annotation of the sheep genome and facilitate improved mapping accuracy of genetic variant and expression data for traits in sheep. The domestic sheep (Ovis aries) is a globally important livestock species raised for a variety of purposes including meat, wool, and milk. Domestication likely occurred in multiple events ~11,000 years ago. The current reference assembly, Oar_rambouillet_v1.0, is based on a combination of Pacific Biosciences RSII WGS long-read and Illumina short-read sequencing.
It has an improved contig NG50 of 2.9 megabases (Mb) and is generally regarded as the official reference assembly for global sheep research. Genome research in sheep holds promise to improve efficiency and sustainability of production and reduce the environmental effects of animal agriculture. The continued maturation of long-read sequencing technologies provided an opportunity to improve upon the sheep reference genome assembly. Because most of the proposed FAANG annotation assays had already been performed on the Rambouillet ewe, lung tissue from the same animal was chosen for DNA extraction. This allowed the use of existing long-read data to supplement new, longer-read, Oxford Nanopore PromethION sequencing. We report a de novo assembly of the same Rambouillet ewe used for Oar_rambouillet_v1.0, based on ~50× coverage of nanopore reads (N50 47 kb) and 75× coverage Pacific Biosciences (PacBio) reads (N50 13 kb). The new assembly, ARS-UI_Ramb_v2.0, offers a 15-fold improvement in contiguity and increased accuracy, providing a basis for regulatory element annotation in the FAANG project and facilitating the discovery of biological mechanisms that influence traits important in global sheep research and production. The full-blood Rambouillet ewe used for this genome assembly was described previously. DNA was extracted from ~50 mg of lung tissue using a phenol:chloroform-based method as described. Briefly, the final aqueous phase was transferred to a 50-mL conical tube and the DNA precipitated with 2 mL of 5 M ammonium acetate and 15 mL of ice-cold 100% ethanol. The DNA was pulled from the alcohol using a Pasteur pipet "hook" and placed in 10 mL of cold 70% ethanol to wash the pellet. The ethanol was poured off and the DNA pellet dried for 20-30 minutes, then dissolved in a dark drawer at room temperature for 48 hours in 1 mL of 10 mM Tris-Cl pH 8.5. Library preparation for Oxford Nanopore long-read sequencing was performed with an LSK-109 template preparation kit as recommended by the manufacturer, with modifications as described by Logsdon. Fastq files were generated from the raw reads for downstream analysis. Sequence data used in the previous Oar_rambouillet_v1.0 assembly were retrieved from the SRA listed under project number PRJNA414087. Contigs were assembled with Oxford Nanopore and PacBio reads generated as described above using canu v1.8 (RRID:SCR_015880) through the trimmed reads stage of assembly. Parameters for contig construction were set as "batOptions = -dg 4 -db 4 -mo 1000". Two Hi-C datasets from liver tissue from 2 different library preparations were retrieved as described above. The Hi-C reads were first aligned to the polished contigs using the Arima Genomics mapping pipeline, and Salsa v2.2 was then used to scaffold the contigs. The Hi-C reads were aligned to the scaffolded assembly with the Arima Genomics mapping pipeline and then processed with PretextMap to visually evaluate the scaffolds as a contact map in PretextView. The scaffolds were manually edited where the contact map indicated misjoins. Gap filling was completed with pbsuite v15.8.24 using both the PacBio and Oxford Nanopore reads. Nanopolish v0.12.5 (RRID:SCR_016157) was used to polish the assembly, after which the chromosomes were oriented and polished.
Variants were called with freebayes v1.3.1 (RRID:SCR_010761) and consensus sequences were generated with BCFtools (RRID:SCR_005227); variant calls were validated with Merfin, which evaluates the k-mer consequences of variant calls and filters unsupported variants. RNA sequencing data were generated from 5 tissues including skin, thalamus, pituitary, lymph node (mesenteric), and abomasum pylorus collected from the animal used to assemble the reference genome. Details regarding the RNA isolation protocol, library preparation, and sequencing as well as the raw data can be found in GenBank under BioProject PRJEB35292, specifically under SRA run numbers ERR3665717 (skin), ERR3728435 (thalamus), ERR3650379 (pituitary), ERR3665711 (lymph node mesenteric), and ERR3650373 (abomasum pylorus). Reads were trimmed with Trim Galore v0.6.4 and aligned to each assembly; indels were identified from the alignments. The annotation for ARS-UI_Ramb_v2.0, NCBI Ovis aries Annotation Release 104, is available in RefSeq and other NCBI genome resources. Here we also provide a liftover of the annotation for Oar_rambouillet_v1.0 onto ARS-UI_Ramb_v2.0. The annotation used for the liftover was NCBI v103 GCF_002742125.1_Oar_rambouillet_v1.0_genomic.fna.gz. The GFF3 format gene annotation file was prepared for processing using liftOff v1.5.2 (RRID:SCR_016582). To compare the breakdown of transcripts captured by the 3 annotations, we generated transcript expression estimates using Kallisto v0.44.0 with RNA-seq data (BioProjects PRJNA414087 and PRJEB35292) to estimate transcript-level expression for every tissue as transcripts per million mapped reads (TPM), compared across the 3 annotations. The 4 flow cells of PromethION data produced 136 Gb of WGS sequence (~51× coverage) in reads having a read N50 of 47 kb. The initial generation of contigs used these data as well as 198.1 Gb of RSII data with a read N50 of 12.9 kb. The ARS-UI_Ramb_v2.0 assembly was submitted to NCBI GenBank under accession number GCF_016772045.1, and statistics of contigs and scaffolds following initial polishing, scaffolding with Hi-C data, and manual editing, gap-filling, and final polishing are shown in Table 1. The Themis-ASM pipeline was implemented to assess assembly quality. The k-mer-based quality value and error rates improved with ARS-UI_Ramb_v2.0 compared with Oar_rambouillet_v1.0 and Oar_v4.0. This is also reflected in the proportion of complete assembly based on k-mers (merCompleteness), which is similar between ARS-UI_Ramb_v2.0 and Oar_rambouillet_v1.0, and both are higher than Oar_v4.0. Furthermore, the single-nucleotide polymorphism (SNP) and indel quality value (baseQV) were greatest overall in ARS-UI_Ramb_v2.0 (41.84), followed by Oar_rambouillet_v1.0 (40.69) and Oar_v4.0 (32.40). The percentage of short reads not mapped to the genome was ≤1% in all 3 assemblies. The completeness of ARS-UI_Ramb_v2.0 was evaluated by examining the presence or absence of evolutionarily conserved genes in each assembly using BUSCO (RRID:SCR_015008) v5.2.2 scores with the cetartiodactyla_odb10 dataset and metaeuk gene predictor. Insertions and deletions (indels) in the ARS-UI_Ramb_v2.0 assembly were characterized and compared with Oar_rambouillet_v1.0 by mapping 150 bp paired-end RNA-seq data from skin, thalamus, pituitary, lymph node (mesenteric), and abomasum pylorus generated from the same animal used to assemble the reference genome; in all tissues, fewer indels were identified in ARS-UI_Ramb_v2.0. The ARS-UI_Ramb_v2.0 annotation represents a substantial improvement over the annotation on Oar_rambouillet_v1.0. For example, for ARS-UI_Ramb_v2.0, 16,500 coding genes have an ortholog to human, and the BUSCO scores demonstrate that 99.1% of the gene models (cetartiodactyla_odb10) are complete in the new annotation versus 98.8% in the previous one.
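The transcript comparisons described above reduce to counting, for each annotation, the transcripts whose TPM exceeds a threshold in the Kallisto output. A minimal R sketch of that counting step is shown below; it assumes standard Kallisto abundance.tsv files in per-annotation directories, and the paths and helper-function name are illustrative, not taken from the original pipeline:
> # count expressed transcripts (TPM above a cutoff) in one Kallisto run
> count_expressed <- function(abundance_file, tpm_cutoff = 0) {
>   ab <- read.delim(abundance_file) # columns: target_id, length, eff_length, est_counts, tpm
>   sum(ab$tpm > tpm_cutoff)
> }
> count_expressed("kallisto_ARS-UI_Ramb_v2.0/abundance.tsv")
> count_expressed("kallisto_Oar_rambouillet_v1.0/abundance.tsv")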
The annotation for ARS-UI_Ramb_v2.0 includes Iso-Sequencing for 8 tissues to improve contiguity of gene models, and CAGE sequencing for 56 tissues to define transcription start sites, which were not used to annotate Oar_rambouillet_v1.0. Using Kallisto we compared the number of expressed transcripts, for the RNA-Seq dataset of 61 tissue samples from Benz2616, across the 3 annotations. There was a considerable increase in the number of transcripts captured by the annotation for ARS-UI_Ramb_v2.0 (60,064) relative to Oar_Rambouillet_v1.0 (42,058) and the liftover annotation (Ramb1LO2) lifting over the annotation for Oar_Rambouillet_v1.0 onto ARS-UI_Ramb_v2.0. According to the annotation report provided by NCBI, 70% of ... The ARS-UI_Ramb_v2.0 genome assembly serves as a reference for genetic investigation of traits important in sheep research and production across the world. This genome is assembled from the same animal used in the Ovine FAANG Project, which provides a high-quality basis for epigenetic annotation to serve the international sheep genomics community and scientific community at large. The datasets supporting the results of this article are available in the RefSeq repository, GCF_016772045.1, and in the GigaScience Database. The annotation report is available at https://www.ncbi.nlm.nih.gov/genome/annotation_euk/Ovis_aries/104/, and Ovis aries Annotation Release 104 is also available in RefSeq and other NCBI genome resources. RNA sequencing data are available under the BioProjects listed above. Supplementary File 1. Annotation lifted over from Oar_rambouillet_v1.0 to ARS-UI_Ramb_v2.0. Supplementary File 2. RNA file from annotation lift over. Supplementary File 3. Scripts from annotation lift over. giab096_GIGA-D-21-00165_Original_Submission. giab096_GIGA-D-21-00165_Revision_1. giab096_GIGA-D-21-00165_Revision_2. giab096_Response_to_Reviewer_Comments_Revision_1. giab096_Reviewer_1_Report_Original_Submission, Aaron Shafer, reviewed 8/30/2021. giab096_Reviewer_2_Report_Original_Submission, Elizabeth Ross, reviewed 10/10/2021. giab096_Supplemental_Files. Abbreviations: bp: base pairs; BUSCO: Benchmarking Universal Single-Copy Orthologs; CAGE: cap analysis gene expression; EDTA: ethylenediaminetetraacetic acid; FAANG: Functional Annotation of Animal Genomes; Gb: gigabase pairs; kb: kilobase pairs; Mb: megabase pairs; NCBI: National Center for Biotechnology Information; PacBio: Pacific Biosciences; SDS: sodium dodecyl sulfate; SNP: single-nucleotide polymorphism; USDA: United States Department of Agriculture; WGS: whole-genome shotgun. Funding was provided by Agriculture and Food Research Initiative Competitive grants from the USDA National Institute of Food and Agriculture supporting improvements of the sheep genomes (2013-67015-21228) and FAANG activities. Additional funding was received from the International Sheep Genome Consortium (217201191442) and infrastructure support from a grant to R. Gibbs from the NIH NHGRI Large-Scale Sequencing Program (U54 HG003273). D.M.B. was supported by appropriated USDA CRIS project 5090-31000-026-00-D. T.P.L.S. was supported by appropriated USDA CRIS Project 3040-31000-100-00D. B.D.R. was supported by appropriated USDA CRIS Project 8042-31000-001-00-D. The USDA does not endorse any products or services. Mentioning of trade names is for information purposes only.
The USDA is an equal opportunity employer. B.M.M., T.P.L.S., D.M.B., and B.D.R. conceptualized the study. B.M.M., N.E.C., M.P.H., and T.P.L.S. selected the animal and collected samples. K.W. and S.C.M. facilitated the generation of RSII, short-read, and Hi-C data. T.P.L.S. facilitated the nanopore long-read data generation. K.M.D., D.M.B., T.P.L.S., B.M.M., and B.D.R. performed the genome assembly, scaffolding, RNA-sequencing alignment, polishing, and quality control. M.S. and E.L.C. contributed the section describing the LiftOff annotation and comparative analysis of transcript expression estimates for the 3 annotations. K.M.D., D.M.B., T.P.L.S., B.M.M., and B.D.R. generated tables and figures and drafted the manuscript. K.M.D., D.M.B., K.W., S.C.M., N.E.C., T.P.L.S., B.M.M., and B.D.R. edited the manuscript. All authors contributed to the article and approved the final version. The authors have no conflicts of interest."} +{"text": "Single-cell RNA sequencing has led to unprecedented levels of data complexity. Although several computational platforms are available, performing data analyses for multiple datasets remains a significant challenge. Here, we provide a comprehensive analytical protocol to interrogate multiple datasets on SingCellaR, an analysis package in R. This tool can be applied to general single-cell transcriptome analyses. We demonstrate steps for data analyses and visualization using bespoke pipelines, in conjunction with existing analysis tools to study human hematopoietic stem and progenitor cells. For complete details on the use and execution of this protocol, please refer to the original publication. •SingCellaR is an open-source analysis tool for single-cell RNA sequencing data •SingCellaR facilitates various analyses, including data integration and comparison •Step-by-step analysis and visualization using SingCellaR for human hematopoietic cells. This protocol describes a method for analyzing single-cell RNA sequencing (scRNA-seq) datasets using an R package called SingCellaR. In addition to standard functions similar to those performed by available R and Python analysis tools like Seurat and Monocle, SingCellaR supports multiple modalities for visualization, including tSNE, UMAP, force-directed graph (FDG) in two- or three-dimensional embeddings, diffusion maps, violin and bubble plots, and heatmaps. Cells can be identified on these plots according to user-defined parameters, such as cluster, donor, tissue of origin, disease state, etc., by accessing relevant information from the SingCellaR object. Multiple signature gene scores can also be superimposed on the same embeddings. For example, visualization of sets of canonical genes used to distinguish different blood cell lineages can help exploration of the cell types contained within a dataset independently of cell clustering algorithms.
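Signature scoring of the kind described above can be illustrated with AUCell, one of the dependency packages installed later in this protocol. The following is a minimal, self-contained sketch on a simulated matrix; the matrix and gene set are toy data for illustration, not SingCellaR internals:
> library(AUCell)
> set.seed(1)
> # toy expression matrix: 200 genes x 50 cells
> exprs <- matrix(rpois(10000, lambda = 1), nrow = 200,
>                 dimnames = list(paste0("gene", 1:200), paste0("cell", 1:50)))
> geneSets <- list(toy_signature = paste0("gene", 1:20)) # illustrative gene set
> rankings <- AUCell_buildRankings(exprs, plotStats = FALSE)
> auc <- AUCell_calcAUC(geneSets, rankings)
> summary(getAUC(auc)["toy_signature", ]) # per-cell enrichment scores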
To annotate cell types and states, SingCellaR uses gene set enrichment analysis to calculate enrichment scores for user-defined, curated gene sets. This system can be used alongside manual inspection for canonical marker genes, or other existing algorithms. We recently demonstrated the utility of the SingCellaR pipeline in a study of human hematopoiesis. Studying the site- and stage-specific changes of normal hematopoiesis over human development is crucial to understanding the origin of disorders that tend to emerge at specific ages. We analyzed scRNA-seq datasets from hematopoietic stem and progenitor cells (HSPCs) from five different tissues sampled across four stages of the human lifetime. Here we present a comprehensive protocol for the analytical pipeline demonstrating step-by-step data analysis and visualization as employed in the recent publication. Timing: 5-10 min. In this protocol, we use scRNA-seq datasets of hematopoietic stem and progenitor cells (HSPCs) from human tissues across different developmental stages, including early fetal liver (eFL), matched fetal liver (FL) and bone marrow (FBM) isolated from the same fetuses, pediatric bone marrow (PBM), and adult bone marrow (ABM). Here, human fetal liver samples from the first and second trimester are referred to as eFL and FL, respectively. We have provided pre-processed datasets and available gene sets on Zenodo: https://doi.org/10.5281/zenodo.5879071. Zenodo (https://zenodo.org/) is an open and citable repository for sharing, curation, and publication of data and software from research outputs, regardless of data format, size, and access restrictions or license. The compiled datasets consist of: 1. cellranger pipeline outputs of samples from different tissues: the file 'cellranger_output.zip' is a zipped folder containing the cellranger pipeline results of 9 samples; 2. human signature gene sets that are used in this protocol; 3. codes (Code.zip) used in this protocol; 4. generated R objects from this protocol. Cell Ranger (https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/what-is-cell-ranger) contains analysis pipelines used to align sequencing reads to the reference genome, generate feature-barcode matrices, and perform other downstream analyses. We performed cellranger count as described in https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/using/count to obtain feature-barcode matrices for each library. The outputs of the pipeline include a Matrix Market file of gene expression (matrix.mtx.gz), cell barcode (barcodes.tsv.gz) and gene metadata (features.tsv.gz) files. Before we begin, the user has to install SingCellaR and other dependency packages. R packages are hosted across multiple repositories, namely the Comprehensive R Archive Network (CRAN), Bioconductor, and GitHub. For a brief introduction, CRAN is the R central software repository for the latest and previous versions of the R distribution and packages. Bioconductor is the R repository that facilitates R packages developed for biological data analysis. GitHub is a commercial repository that hosts services for individuals and teams for software version control and collaboration. The R functions 'install.packages', 'BiocManager::install', and 'devtools::install_github' will be used to install packages from CRAN, Bioconductor, and GitHub, respectively. Prior to installation, the function 'if(!require("package_name"))' will be used to check if a package has already been installed on the computer, and installation will proceed for packages not yet installed.
1. Install SingCellaR from GitHub:
> if(!require(devtools)) {
>  install.packages("devtools")
> }
> if(!require(BiocManager)) {
>  install.packages("BiocManager")
> }
> devtools::install_github("supatt-lab/SingCellaR")
CRITICAL: We tested SingCellaR installation on macOS Mojave and Catalina, and Windows 10.
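If installation succeeds, the package should load cleanly; a quick sanity check is shown below (the version printed will depend on the release installed):
> library(SingCellaR)
> packageVersion("SingCellaR")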
2. Install required python modules by running the following R code:
a. Modules required for the force-directed graph analysis:
> library(reticulate)
> conda_create("r-reticulate") # the environment name here is illustrative
> py_install("networkx", envname = "r-reticulate") # fa2 depends on networkx; module choice assumed
> py_install("fa2", envname = "r-reticulate")
b. Module required for doublet removal using Scrublet:
> py_install("scrublet", envname = "r-reticulate")
CRITICAL: We tested the fa2 python module for the force-directed graph analysis using Python versions 2.7, 3.6 and 3.8. Using a Conda environment is recommended in conjunction with the reticulate package. Conda (https://docs.conda.io/en/latest/) is a virtual environment management system for Python. With Conda, the user can create, remove, and update environments that have different versions of Python packages installed in them. This flexibility is especially helpful on devices on which the user does not have administrative privileges to install Python packages or to compile specific package versions.
3. Install required R packages by running the following R code:
a. harmony - required for data integration using the Harmony method:
> if(!require(harmony)) {
>  install.packages("harmony")
> }
b. AUCell - required for computing AUCell scores with specified gene signatures:
> if(!require(AUCell)) {
>  BiocManager::install("AUCell")
> }
c. doParallel and doRNG - required for parallel processing in AUCell analysis:
> if(!require(doParallel)) {
>  install.packages("doParallel")
> }
> if(!require(doRNG)) {
>  install.packages("doRNG")
> }
d. DAseq - required for the analysis of differential abundance:
> if(!require(DAseq)) {
>  devtools::install_github("KlugerLab/DAseq")
> }
e. destiny - required for the trajectory analysis using diffusion maps:
> if(!require(destiny)) {
>  BiocManager::install("destiny")
> }
Note: The destiny package is not available for Bioconductor version 3.13. The user can install this package from GitHub:
> devtools::install_github("theislab/destiny") # repository path assumed
Optional: SingCellaR supports multiple dataset integration and batch correction methods, including Scanorama, Seurat CCA/RPCA, rliger, ComBat, and Limma. Install the packages required for the methods you plan to use:
i. rliger - required for the data integration:
> install.packages("rliger")
j. sva (ComBat) - required for the batch removal:
> if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
> BiocManager::install("sva")
k. limma - required for the batch removal:
> if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
> BiocManager::install("limma")
R version 4 or higher is required, together with the other R packages listed above.
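Before moving on, it can help to confirm that reticulate sees the python modules installed above; a short check, assuming the environment name used earlier:
> library(reticulate)
> use_condaenv("r-reticulate", required = TRUE) # environment name as assumed above
> py_module_available("fa2")      # should return TRUE
> py_module_available("scrublet") # should return TRUE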
SingCellaR object is an extension of the SingleCellExperiment (periment object f> data_matrices_dir<-\"./cellranger_output/eFL1/\"> eFL_1<-new(\"SingCellaR\")> eFL_1@dir_path_10x_matrix<-data_matrices_dir> eFL_1@sample_uniq_id<-\"eFL_1\"> load_matrices_from_cellranger> eFL_1This step creates a SingCellaR object for each sample and performs quality control (QC) to identify\u00a0cells that qualify for further downstream analyses. We demonstrate the selection process\u00a0of high-quality cells using multiple QC plots, data normalization, and identifying highly variable genes. The process starts from reading in the input files for each sample generated directly from the cellranger pipeline. Here, we show an early fetal liver (eFL_1) dataset as an example.CRITICAL: The \u2018cellranger.version\u2019 parameter is required to be compatible with the cellranger pipeline output file. The cellranger output file \u2018features.tsv.gz\u2019 for gene features is generated from Cell Ranger version 3 and above, whereas version 2 of Cell Ranger creates the file \u2018genes.tsv.gz\u2019.6.Create cell metadata. Cell metadata will be created. Rows represent cells and columns represent variables. Variables computed in this step include the number of UMIs and detected genes per cell, and percentage of mitochondrial gene expression for each cell.> process_cells_annotationa.mito_genes_start_with: Gene names starting with \u2018MT-\u2019 are used as a set of mitochondrial genes for human and \u2018mt-\u2019 for mouse samples.Note: The cell metadata can be accessed using the function \u2018get_cells_annotation(eFL_1)\u2019 or eFL_1@meta.data. The user can manually add additional information to the columns of the cell metadata.The following parameter is required:7.Visualize QC matrices. QC matrices computed in step 6 can be explored using the plotting functions:a.> plot_cells_annotationHistogram A.> plot_b.> plot_cells_annotationBoxplot B.> plot_c.> plot_UMIs_vs_Detected_genes(eFL_1)Plot of the number of UMIs versus the number of detected genes per cell C.> plot_Now the SingCellaR object \u2013 eFL_1 is created.8.Annotate cell quality, identify expressed genes, and assign cell and gene status into metadata. After visualizing QC matrices in step 7, the user can specify filtering parameters using the observed number of UMIs and detected genes per cell, and percentage of mitochondrial gene expression. The function \u2018filter_cells_and_genes\u2019 assigns a new column named IsPassed that will be added into the cell metadata. Cells that pass QC will be annotated as TRUE. Expressed genes will be identified in this step and the column named IsExpress will be added into the gene metadata. Genes expressed above and below the user-defined threshold will be annotated as TRUE or FALSE. Although all original cells and genes are retained in the metadata and gene expression matrix in this step, these annotations will be used to subset cells and expressed genes for further downstream analyses. The number of cells passing QC thresholds will be shown on the R console after running the following code:> filter_cells_and_genesa.min_UMIs: The lower threshold for UMI counts, above which cells are annotated as high-quality. To be used in conjunction with max_UMIs argument. Default value is 1,000.b.max_UMIs: The upper threshold for UMI counts, below which cells are annotated as high-quality. To be used in conjunction with min_UMIs argument. 
9. Normalize UMI counts. SingCellaR scales UMI counts by normalizing each library size to 10,000 or to the mean library size.
> normalize_UMIs(eFL_1, use.scaled.factor = TRUE)
The following parameter is required:
use.scaled.factor: When set to TRUE, the gene expression values will be multiplied by 10,000 (by default) and normalized against the library size of each cell. The user can change the scale factor value using the scale.factor parameter. If set to FALSE, the function will use the mean library size across all cells as the scale factor.
CRITICAL: Using use.scaled.factor=TRUE is recommended for 10x Genomics data. The user should consider specifying use.scaled.factor=FALSE for scRNA-seq data generated from plate-based protocols, e.g., Smart-seq2.
10. Identify highly variable genes. SingCellaR fits a generalized linear model (GLM) to gene expression and the coefficient of variation to identify highly variable genes. The GLM is suitable for modeling log-normal data from a sparse normalized gene expression matrix. A column named IsVarGenes is added to the gene metadata, and genes identified as highly variable are annotated as TRUE, while all other genes are annotated as FALSE. The number of genes used to fit the model and the number of identified variable genes will be shown after running the following code:
> get_variable_genes_by_fitting_GLM_model(eFL_1, mean_expr_cutoff = 0.1, disp_zscore_cutoff = 0.1)
The following parameters are required:
a. mean_expr_cutoff: The mean normalized expression value, above which genes are identified as highly variable. Default value is 0.1.
b. disp_zscore_cutoff: The dispersion z-score, above which genes are identified as highly variable. Default value is 0.1.
Note: In this step, we used lower cut-off values to increase the number of detected highly variable genes per sample for the downstream analyses.
11. Save the R object for further analyses.
> save(eFL_1, file = "eFL_1.SingCellaR.rdata") # file name illustrative
12. Repeat the analyses for the rest of the samples, as sketched after this step. R codes are available at Zenodo: https://doi.org/10.5281/zenodo.5879071.
Pause point: The user can pause the analysis after pre-processing each sample.
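For step 12, the per-sample steps 5-11 can be wrapped in a loop over the remaining samples. The sketch below assumes sample IDs and a folder layout matching the Zenodo archive and reuses the documented defaults; the protocol calls SingCellaR functions without re-assignment, so the same style is used here (if your package version returns modified objects instead, assign the results back):
> samples <- c("eFL_2", "FL_1", "FL_2", "FBM_1", "FBM_2", "PBM_1", "PBM_2", "ABM_1") # sample IDs assumed
> for (s in samples) {
>   obj <- new("SingCellaR")
>   obj@dir_path_10x_matrix <- paste0("./cellranger_output/", s, "/")
>   obj@sample_uniq_id <- s
>   load_matrices_from_cellranger(obj, cellranger.version = 3)
>   process_cells_annotation(obj, mito_genes_start_with = "MT-")
>   filter_cells_and_genes(obj, min_UMIs = 1000, max_UMIs = 30000,
>                          min_detected_genes = 1000, max_detected_genes = 8000,
>                          genes_with_expressing_cells = 10)
>   normalize_UMIs(obj, use.scaled.factor = TRUE)
>   get_variable_genes_by_fitting_GLM_model(obj, mean_expr_cutoff = 0.1,
>                                           disp_zscore_cutoff = 0.1)
>   assign(s, obj)
>   save(list = s, file = paste0(s, ".SingCellaR.rdata"))
> }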
Timing: 15-30 min for each group, 1-2 h for all stages.
This step integrates the individual R objects from the pre-processed biological or technical replicates generated in step 12. Here, we illustrate the integration of two early fetal liver samples collected from two donors.
13. Load the SingCellaR package.
> library(SingCellaR)
14. Integrate pre-processed biological replicates. The user will initialize the integrated SingCellaR class object 'SingCellaR_Int' and assign a unique identifier prior to merging the two datasets using the 'preprocess_integration' function.
> eFL <- new("SingCellaR_Int")
> eFL@dir_path_SingCellR_object_files<-"./"
> eFL@SingCellR_object_files=c("eFL_1.SingCellaR.rdata", "eFL_2.SingCellaR.rdata") # file names as saved in step 11 (illustrative)
> preprocess_integration(eFL)
> eFL
15. Annotate cell quality and expressed genes. The filtering process has already been performed separately for each sample (see step 8); therefore the filtering parameters for this step are set to include all cells. From the filtering output, all cells will be retained after running the following code:
> filter_cells_and_genes(eFL, min_UMIs = 0, max_UMIs = 1000000,
>                        min_detected_genes = 0, max_detected_genes = 100000,
>                        genes_with_expressing_cells = 10) # permissive thresholds so that no cells are removed (values illustrative)
16. Normalize and scale UMI counts.
> normalize_UMIs(eFL, use.scaled.factor = TRUE)
Note: See step 9 for details of the required parameters.
17. Regress out confounding factors. The normalized and scaled gene expression values from step 16 will be adjusted by regressing out cell-to-cell variation in gene expression due to confounding factors. To this end, SingCellaR implements the 'lmFit' function from the R package limma.
The following parameter is required:
a. residualModelFormulaStr: The formula used to regress out confounding factors. The names of the variables defined must be the same as the column names of the cell metadata.
Note: The user can change the residualModelFormulaStr parameter and perform the following steps down to step 23 to explore the effect on cell clustering of specifying different sets of confounding factors. For example, the user can set residualModelFormulaStr = "~UMI_count+percent_mito" and compare with residualModelFormulaStr = "~UMI_count+percent_mito+sampleID" to explore the effect of sample.
18. Identify highly variable genes. The number of genes used for fitting the GLM model and the number of highly variable genes will be shown after running the following code:
> get_variable_genes_by_fitting_GLM_model(eFL, mean_expr_cutoff = 0.1, disp_zscore_cutoff = 0.1)
Note: See step 10 for details of the required parameters.
19. Remove selected genes. Here, we remove mitochondrial and ribosomal genes from the highly variable genes identified in step 18, to avoid skewing downstream analyses by ribosomal and mitochondrial gene expression. The number of genes that are excluded will be shown after running the following code:
> remove_unwanted_genes_from_variable_gene_set(eFL,
>  gmt.file = "Human_genesets/human.ribosomal-mitochondrial.genes.gmt") # argument name assumed; file as provided on Zenodo
Note: The human.ribosomal-mitochondrial.genes.gmt file can be downloaded from Zenodo: https://doi.org/10.5281/zenodo.5879071 under the folder Human_genesets.
20. Visualize highly variable genes.
> plot_variable_genes(eFL)
21. Perform principal component analysis (PCA). To interpret the relationships across single cells, dimensionality reduction methods are required to reduce high-dimensional data to a visualizable two- or three-dimensional space. In PCA, the reduced dimensional space is represented by principal components. The top PCs capture most of the variance of the dataset. Here, we perform linear dimensionality reduction using PCA and then bring forward the most informative PCs for further nonlinear dimensionality reduction to visualize cells in a two-dimensional space. To this end, the highly variable genes identified in steps 18-19 and visualized in step 20 are used for PCA via the 'runPCA' function, a wrapper for the 'irlba' function from the irlba package (http://bwlewis.github.io/irlba/); a toy illustration of this truncated PCA follows below.
> SingCellaR::runPCA(eFL, use.components = 50, use.regressout.data = TRUE)
The following parameters are required:
a. use.components: The number of principal components (PCs) to estimate. Default value is 50.
b. use.regressout.data: If set to TRUE (default), the adjusted gene expression values from step 17 will be used.
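runPCA wraps irlba, which computes only the leading components rather than a full decomposition. The toy below illustrates the idea on a random matrix; it is a sketch of the technique, not the package's internals:
> library(irlba)
> set.seed(1)
> # stand-in for a genes-x-cells expression matrix: 100 genes, 20 cells
> M <- matrix(rnorm(2000), nrow = 100)
> pca <- prcomp_irlba(t(M), n = 10) # cells as rows; keep the first 10 components
> dim(pca$x)                        # 20 cells x 10 PCs: the embedding used downstream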
22. Visualize principal components. The elbow plot is used to determine the number of PCs to be included in further dimensionality reduction analyses:
> plot_PCA_Elbowplot(eFL)
23. Perform nonlinear dimensionality reduction analyses. Running nonlinear dimensionality reduction on all highly variable genes requires high computational resources and processing time; hence, using the PCs identified in step 22 for nonlinear dimensionality reduction is a standard technique in scRNA-seq analysis. Nonlinear dimensionality reduction is suitable for capturing cellular heterogeneity. Here, we perform UMAP using the 'runUMAP' function, a wrapper for the 'umap' function from the uwot package (https://cran.r-project.org/web/packages/uwot/). Based on the elbow plot shown in step 22, we select the first 30 PCs for the UMAP analysis.
> SingCellaR::runUMAP(eFL, dim_reduction_method = "pca", n.dims.use = 30,
>                     n.neighbors = 30, uwot.metric = "cosine")
The following parameters are required:
a. dim_reduction_method: The method name for the linear dimensionality reduction.
b. n.dims.use: The number of selected PCs, as determined in step 22.
c. n.neighbors: The number of neighboring cells used for manifold approximation. Default value is 30.
d. uwot.metric: The distance metric name. Default is 'cosine'.
Note: The user can also apply the t-Distributed Stochastic Neighbor Embedding (tSNE) approach using the 'runTSNE' function. The reduced-dimension coordinates for UMAP and tSNE can be accessed using the functions 'get_umap.result' and 'get_tsne.result', respectively.
24. Visualize cell lineages in the low-dimensional space. To explore whether the cells in each lineage cluster in close proximity, we visualize the UMAP result with the expression of multi-lineage gene sets using the 'plot_umap_label_by_multiple_gene_sets' function. Here, for example:
> plot_umap_label_by_multiple_gene_sets(eFL,
>  gmt.file = "Human_genesets/human.signature.genes.v1.gmt",
>  show_gene_sets = c("HSC", "Erythroid", "Myeloid", "Lymphoid"), # gene set names illustrative
>  custom_color = c("red", "blue", "orange", "green"), # colors illustrative
>  isNormalizedByHouseKeeping = F,
>  point.size = 1)
The following parameters are required:
a. gmt.file: Path to the file containing the gene signatures in GMT format.
b. show_gene_sets: The names of the gene signatures to plot on the UMAP.
c. custom_color: The color assigned to each signature specified in 'show_gene_sets'.
d. isNormalisedByHouseKeeping: When set to TRUE (default), the gene expression values of the individual genes of each specified gene signature will be normalized by the housekeeping genes. The housekeeping genes are defined as the top 100 genes with the highest total gene expression values across all cells.
e. point.size: Size of the data points on the UMAP plot. Default value is 2.
Note: Lineage gene sets (human.signature.genes.v1.gmt) are available at Zenodo: https://doi.org/10.5281/zenodo.5879071 under the Human_genesets folder and from the original publication.
25. Identify cell clusters. A k-nearest neighbor graph is constructed and community detection is used for cell clustering; a self-contained toy illustration of this recipe follows below.
> identifyClusters(eFL, n.dims.use = 30, dim_reduction_method = "pca",
>                  n.neighbors = 30, knn.metric = "euclidean")
The following parameters are required:
a. n.dims.use: The number of PCs to use. Default number is 30.
b. dim_reduction_method: The dimensionality reduction analysis name.
c. n.neighbors: The number of neighboring cells. This number may be the same as specified in step 23 if UMAP is used. Default number is 30.
d. knn.metric: The distance metric; 'euclidean' is used by default. Another option is 'cosine'.
Note: The cluster metadata for each cell can be accessed using the 'get_cluster' function.
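The kNN-graph-plus-Louvain recipe named in step 25 can be reproduced independently of the package with igraph; the following self-contained toy makes the two stages explicit (everything here is simulated):
> library(igraph)
> set.seed(1)
> emb <- matrix(rnorm(300), nrow = 60) # toy "PCA embedding": 60 cells in 5 dimensions
> d <- as.matrix(dist(emb))            # euclidean distances between cells
> # 10 nearest neighbors per cell (excluding the cell itself)
> nn <- t(apply(d, 1, function(x) order(x)[2:11]))
> edges <- do.call(rbind, lapply(seq_len(nrow(nn)), function(i) cbind(i, nn[i, ])))
> g <- simplify(graph_from_edgelist(edges, directed = FALSE))
> cl <- cluster_louvain(g) # Louvain community detection on the kNN graph
> table(membership(cl))    # cluster sizes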
Louvain clusters can be shown on UMAP using the \u2018plot_umap_label_by_clusters\u2019 function function D.> plot_a.show_method: The clustering detection name used as in step 25.b.mark.clusters: If set to TRUE (default) , cluster identifiers will be shown on the plot.The following parameters are required:27.Identify cluster-specific genes. The user can identify marker genes, which are particularly expressed in each cluster using the \u2018findMarkerGenes\u2019 function. Differentially expressed gene analysis between one cluster against all other clusters is performed using the nonparametric Wilcoxon test on normalized expression values for the comparison of expression level and Fisher\u2019s exact test for the comparison of expressing cell frequency . This process is iterated for each cluster against all other clusters, therefore the processing time in this step is dependent on the number of cells and clusters. By default, the minimum log2 fold change \u2018min.log2FC\u2019 parameter is set to 0.5 and the minimum fraction of expressing cells in each cluster \u2018min.expFraction\u2019 parameter is set to 0.3.> findMarkerGenesrequency . P-valuea.cluster.type: The clustering detection method used in step 25.The following parameter is required:28.Save R object for further analyses.> save29.Repeat the integration process for all biological replicates of FL, FBM, and PBM samples. There is only one sample for ABM. Therefore, data integration is not required for this sample at this step. All R codes provided for each integration are available at Zenodo: https://doi.org/10.5281/zenodo.5879071.Pause point: The user can pause the analysis after integrating the biological replicates for each developmental stage and save the results in multiple SingCellaR objects.This step integrates the individual R objects from pre-processed biological or technical replicates generated from step 12. Here, we illustrate the integration of two early fetal liver samples collected from two donors .13.Load Timing: 2\u20133 hThe aim of integrating all samples is to assess the existence of batch or donor-specific effects that are confounding factors contributing to differences in gene expression profile across samples. Examples of batch effect include differences in library preparation methods, sequencing batch, and donor or sample ID . If the 30.Load SingCellaR package.> library(SingCellaR)31.Initialize the SingCellaR_Int object and merge datasets generated from step 29.> Human_HSPC <- new(\"SingCellaR_Int\")> Human_HSPC@dir_path_SingCellR_object_files<-\"./\"> Human_HSPC@SingCellR_object_files=c> preprocess_integration(Human_HSPC)> Human_HSPC32.Annotate cell quality. Input parameters for integrated samples have been set to include all cells. The user should observe that there are no cells being filtered out after running the following code:> filter_cells_and_genes33.Incorporate donor and sequencing batch information into cell metadata. 
This information is required to perform the batch correction.> meta.data <- read.delim> Human_HSPC@meta.data<- meta.dataGeneral examples of data integration include integrating samples from healthy donors and patients and from> head(Human_HSPC@meta.data)34.Normalize and scale UMI counts.> normalize_UMIs35.Identify highly variable genes.> get_variable_genes_by_fitting_GLM_model36.Remove ribosomal and mitochondrial genes.> remove_unwanted_genes_from_variable_gene_set)37.Visualize highly variable genes.> plot_variable_genes(Human_HSPC)38.Run PCA.> SingCellaR::runPCA39.Visualize principal components. Based on the elbow plot, the first 40 PCs will be used for data integration.> plot_PCA_Elbowplot(Human_HSPC)40.Integrate data using Supervised Harmony. We introduce Supervised Harmony, a method for data integration implemented in SingCellaR. Supervised Harmony can be performed using the \u2018runSupervised_Harmony\u2019 function. This method is an adaptation of Harmony method ,\u00a0\u00a0\u00a0\u00a0n.dims.use\u00a0= 40,\u00a0\u00a0\u00a0\u00a0hcl.height.cutoff\u00a0= 0.3,\u00a0\u00a0\u00a0\u00a0harmony.max.iter\u00a0= 20,\u00a0\u00a0\u00a0\u00a0n.seed\u00a0= 6)y method . More dey method . Here, sa.covariates: The name(s) of the covariate(s) specified as batch effect to be adjusted. The names should be the same as the column names of the cell metadata.b.n.dims.use: The number of PCs as determined from step 39 to be used in this step.c.hcl.height.cutoff: The cutree cut-off value for hierarchical clustering. Default value is 0.25.d.harmony.max.iter: The maximum number of rounds to run harmony. Default value is 10.e.n.seed: The random seed number generator. Default value is 1.CRITICAL: Before running Supervised Harmony method, the \u2018findMarkerGenes\u2019 function must be performed for each developmental stage analysis (see step 27). The seed number (random number generator) and software version can vary across different devices. Hence, the user may notice variations in the rotation of the plots and clusters, which can be verified and visualized using lineage genes (see step 24).The following parameters are required:41.Nonlinear dimension reduction analysis.> SingCellaR::runUMAPUpdated meta.data can be checked by running:> supervised_harmony.UMAP<-get_umap.result(Human_HSPC)> saveRDS42.Integrate data using Harmony. SingCellaR also implements a wrapper function for Harmony integration method (> library(harmony)> SingCellaR::runHarmony,\u00a0\u00a0\u00a0\u00a0\u00a0n.dims.use\u00a0= 40,\u00a0\u00a0\u00a0\u00a0\u00a0harmony.max.iter\u00a0= 20,\u00a0\u00a0\u00a0\u00a0\u00a0n.seed\u00a0= 6)n method . Harmonya.covariates: The name(s) of the covariate(s) specified as batch effect to be adjusted. The names should be the same as the column names of the cell metadata.b.n.dims.use: The number of PCs as determined from step 39 to be used in this step.c.harmony.max.iter: The maximum number of rounds to run harmony. Default value is 10.d.n.seed: The random seed number generator. Default value is 1.> SingCellaR::runUMAP> harmony.UMAP<-get_umap.result(Human_HSPC)> saveRDSThe UMAP analysis result from Harmony integration will be saved. This UMAP object contains cell metadata and UMAP coordinates that will be used to compare with the results from other integrative methods.The following parameters are required:43.Integrate data using Seurat. SingCellaR implements two wrapper functions for Seurat integration . 
Due to the fast integration of using RPCA, in this protocol, we will demonstrate the function \u2018runSeuratIntegration_with_rpca\u2019 as an example. However, the user should try Seurat CCA to make a comparison of the integrative results. The user can find how to use the function \u2018runSeuratIntegration\u2019 from SingCellaR\u2019s vignette. After the integration, the UMAP analysis from Seurat RPCA integration will be performed to obtain the embedding.> library(Seurat)> meta.data<-get_cells_annotation(Human_HSPC)> rownames(meta.data)<-meta.data$Cell> SingCellaR::runSeuratIntegration_with_rpcaegration . First, a.Seurat.metadata: The cell metadata.b.n.dims.use: The number of PCs as determined from step 39 to be used in this step.c.Seurat.split.by: The indicated feature name found in the cell metadata for splitting samples for integration.d.Use.SingCellaR.varGenes: If set to TRUE, the highly variable genes identified by SingCellaR will be used. If set to FALSE, the highly variable genes will be identified using Seurat. Default value is FALSE.> SingCellaR::runUMAP> Seurat_rpca.UMAP<-get_umap.result(Human_HSPC)> saveRDSNext, the UMAP analysis from Seurat RPCA integration will be performed and saved. This UMAP object will be used to compare with the results from other integrative methods.The following parameters are required:44.Integrate data using Scanorama. SingCellaR implements a wrapper function for Scanorama integration (egration . Scanora> runScanorama(Human_HSPC)> runPCA> SingCellaR::runUMAP> Scanorama.UMAP<-get_umap.result(Human_HSPC)> saveRDS45.Integrate data using Limma batch correction method. To perform Limma analysis > runPCA> SingCellaR::runUMAP> Limma.UMAP<-get_umap.result(Human_HSPC)> saveRDS46.Assign a cell type to single cells using the AUCell analysis. In this step, we will perform AUCell analysis (> library(AUCell)> exprMatrix <- get_umi_count(Human_HSPC)> human_HSPCs_cells_rankings <- AUCell_buildRankingsanalysis with seva.exprMat: The raw expression count matrix. This can be retrieved from the SingCellaR object using the \u2018get_umi_count\u2019 function.b.nCores: The number of cores to use for parallel processing. The maximum number of cores is dependent on the user\u2019s device. Default value is 1.c.plotStats: If set to TRUE (default), the expression statistics will be summarized and plotted in the histogram and boxplots.Note: This step may be time-consuming. The user is advised to save the output of this step using the following code:> save> human_HSPCs.AUCells.score <- Run_AUCellNext, the AUCell analysis will be performed using the \u2018Run_AUCell\u2019 function.The following parameters are required:d.AUCell_buildRankings.file: The input file name from the AUCell rankings.e.geneSets.gmt.file: The GMT file name that contains gene sets.> SingCellaR::runUMAP>plot_umap_label_by_AUCell_score,Human_HSPC.AUCells.Score,AUCell_cutoff=0.15,point.size\u00a0= 0.5)To explore the AUCell scores on UMAP plots, the user can run UMAP analysis using different types of integrative methods. This step is to identify the AUCell cut-off score for a particular cell type. 
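Note: Because the AUCell cutoff score is arbitrary, one practical way to choose it is to inspect the distribution of scores for a given signature before plotting them on the UMAP. The following is a minimal sketch, assuming the Run_AUCell output can be coerced to a data frame with one score column per gene signature (as in the cell type assignment code shown further below); the signature name, bin number, and the 0.15 threshold are illustrative placeholders only:
> library(ggplot2)
> score.df <- as.data.frame(human_HSPCs.AUCells.score)
> ggplot(score.df, aes(x = Myeloid)) +
  geom_histogram(bins = 100) + # score distribution for one signature
  geom_vline(xintercept = 0.15, linetype = "dashed") + # candidate cutoff
  theme_classic()
A clear separation of the two modes around the chosen threshold indicates that the cutoff cleanly splits signature-positive from signature-negative cells.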
The example below shows the \u2018plot_umap_label_by_AUCell_score\u2019 function that will be used to plot the myeloid AUCell scores A.> SingC> Human_HSPC.CellType<-Human_HSPC.AUCells.Score> Human_HSPC.CellType$CellType<-\"\"> Human_HSPC.CellType$CellType[Human_HSPC.CellType$HSPC_MPP >0.2]<-\"HSC_MPP\"> Human_HSPC.CellType$CellType[Human_HSPC.CellType$Erythroid >0.15]<-\"Erythroid\"> Human_HSPC.CellType$CellType[Human_HSPC.CellType$Myeloid >0.15]<-\"Myeloid\"> Human_HSPC.CellType$CellType[Human_HSPC.CellType$Lymphoid >0.15]<-\"Lymphoid\"> Human_HSPC.CellType$CellType[Human_HSPC.CellType$Megakaryocyte >0.15]<-\"Megakaryocyte\"> Human_HSPC.CellType$CellType[Human_HSPC.CellType$Eosinophil_Basophil_Mast >0.15]<-\"Eo_Ba_Mast\"> Human_HSPC.CellType$CellType[Human_HSPC.CellType$Endothelial_cells\u00a0>\u00a00.15]<-\"Endothelial_cell\"Next, cells with high AUCell scores for each cell type will be assigned.> table(Human_HSPC.CellType$CellType)> saveRDSThe user can explore the number of cells with high AUCell scores for each cell type using the function below and the Human_HSPC.CellType data frame object will be saved for use as the reference to perform benchmarking explained in the next step.The following parameters are required:The UMAP analysis result from Supervised Harmony integration will be saved. This UMAP object will be used to compare with the results from other integrative methods (see steps below).47.Benchmark distinct integrative methods using LISI and kBET methods. Next, we assess whether single cells with identified cell types derived from the AUCell analysis are clustered well across covariate variables . SingCellaR provides the wrapper functions for a Local Inverse Simpson\u2019s Index (LISI) (k-nearest-neighbor batch-effect test (kBET) (> library(lisi)> reference.celltypes<-\"Human_HSPC.CellType_from_AUC_High.rds\"> integrative.umaps<-c> method.names<-c> runLISIx (LISI) and k-net (kBET) to measut (kBET) B.> libraa.lisi_label1: The covariate variable name of interest such as batch or donor. Default value is donor.b.lisi_label2: The variable name that represents ground truth or high AUC score cell type. Default value is CellType.c.reference.celltype.rds.file: The RDS file name that contains cell type information.d.integrative.umap.rds.files: The RDS file names that contain UMAP coordinate information generated by different integrative methods.e.integrative.method.names: The integrative method names that represent in the same order as in integrative.umap.rds.files.f.IsShowPlot: If set to TRUE (default), the iLISI and cLISI scores will be plotted.> library(kBET)> reference.celltypes<-\"Human_HSPC.CellType_from_AUC_High.rds\"> integrative.umaps<-c> method.names<-c>kBET_result<-runKBETSecond, the \u2018runKBET\u2019 function is performed as shown below. This function will calculate kBET scores across different integrative methods and return a data frame that can be used for plotting.The following parameters are required:g.Covariate_variable_name: The covariate variable name of interest such as batch or donor. Default value is donor.h.reference.celltype.rds.file: The RDS file name that contains cell type information.i.integrative.umap.rds.files: The RDS file names that contain UMAP coordinate information generated by different integrative methods.j.integrative.method.names: The integrative method names that represent in the same order as in integrative.umap.rds.files.k.n.sample: The downsample size of data points used in kBET analysis. 
Default value is 1,000.Note: This step may be time-consuming, depending on the number of cells downsampled for kBET analysis.> level_order <- factor)> ggplot)\u00a0+\u00a0\u00a0+ geom_boxplot+theme_classic+theme(axis.title.x=element_blank)Next, kBET scores across integrative methods will be plotted using the \u2018ggplot\u2019 function C.> levelIn this step, we illustrate how to benchmark integrative results generated from different methods using the wrapper functions for LISI and kBET analyses implemented in SingCellaR. We show the objective measurement of integration for each method using iLISI and cLISI scores B and kBEThe following parameters are required:48.Visualize selected features and cell lineages on low dimensional space. Here, we will assess whether the data integration was performed successfully. To this end, we annotate the UMAP by sample ID, donor type, sequencing batch, and lineage signature genes. After running the following codes, we observe that cells are clustered by cell lineage, while sample ID, donor type and sequencing batch effects are successfully corrected. These indicate that data integration and batch correction was effective in eliminating batch effect, while enabling functionally related cell to be clustered in close proximity.a.> plot_umap_label_by_a_feature_of_interestAnnotate UMAP plot by sample ID A.> plot_b.> plot_umap_label_by_a_feature_of_interestAnnotate UMAP plot by donor B.> plot_c.> plot_umap_label_by_a_feature_of_interestAnnotate UMAP plot by sequencing batch C.> plot_i.feature: The feature to annotate on UMAP plot. The feature name should match the column name of the cell metadata.ii.point.size: Size of the data points on UMAP. Default value is 1.iii.mark.feature: If set to TRUE (default), the feature name will be shown on the plot.The following parameters are required:d.> plot_umap_label_by_multiple_gene_sets,\u00a0custom_color\u00a0= c,\u00a0isNormalizedByHouseKeeping\u00a0= F,\u00a0point.size\u00a0= 1)Annotate UMAP plot by lineages genes D.> plot_49.Detect and assign clusters.> identifyClustersNote: Information in this step and the required parameters have been detailed in step 25 with the additional \u2018integrative_method\u2019 parameter specified to indicate the data integration and batch correction method used in step 40.50.Visualize clusters on low dimensional space. A.Figure\u00a0> plot_umap_label_by_clusters51.Identify cluster-specific genes. This step will perform differential gene expression analysis to identify marker genes per each cluster.> findMarkerGenesNote: See details in step 27. This step will take time to run on the fully integrated datasets depending on the number of cells and identified clusters.52.Save the integrated R object for further analyses.> savePause point: The user can save the integrative SingCellaR_Int object for further downstream analyses.Timing: 1.5\u20132 hhttps://doi.org/10.5281/zenodo.5879071 in GMT file format and are also available in Table\u00a0S3 from the original publication (53.Load the SingCellaR package.> library(SingCellaR)54.Load the integrated R object generated from step 52.> load(file\u00a0= \"./Human_HSPC_All.SingCellaR.rdata\")55.Generate the pre-ranked genes. For each cluster, differential gene expression analysis is performed erformed against a.cluster.type: The clustering method name.b.fishers_exact_test: The cut-off p-value. Default value is 0.1.c.min.expFraction: The fraction of expressing cells, above which, the gene will be included for GSEA. 
Default value is 0.01.d.min.log2FC: The log2 fold change, above which, the gene will be included for GSEA. Default value is 0.1.Note: The processing time of this step depends on the number of cells and clusters. The user is advised to save the output of this step using the following code:> saveThe following parameter is required:56.Perform GSEA. For each cluster, the ranked genes are subjected to GSEA to assess the enrichment for all curated hematopoietic gene sets.> fgsea_Results <- Run_fGSEA_for_multiple_comparisonsa.GSEAPrerankedGenes_list: The object containing the ranked genes for each cluster generated from step 55.b.gmt.file: Curated gene sets in GMT file format.Note: Here, we curated gene sets encompassing 75 hematopoietic signatures, but the user can also generate other customized gene sets in GMT file format as the input for GSEA. Each line of the GMT file represents one gene set. Specifically, the first column represents the name of the gene set, the second column represents the description of the gene set,\u00a0and the third column onwards represents the genes that constitute the gene set, whereby\u00a0each column represents one gene. The GMT file should be saved in tab-delimited format.The following parameters are required:57.Visualize GSEA results. A heatmap is used to observe and compare enrichment scores of each gene set (rows) across all clusters (columns). This visualization allows the user to annotate a cell type identity and cell states to each cluster based on the degree of enrichment of the curated gene sets. ne sets. B.> plot_a.isApplyCutoff: If set to TRUE, only the normalized enrichment scores (NES) of gene sets with adjusted P-values below the user-defined values in \u2018adjusted_pval\u2019 argument will be displayed on the heatmap. Default is FALSE.b.use_pvalues_for_clustering: If set to TRUE (default), the -log10 will be used instead of NES to cluster rows and/or columns.c.show_NES_score: If set to TRUE (default), NES will be displayed on the heatmap.d.fontsize_row: The font size of the gene set names along the rows of the heatmap. Default value is 5.e.adjusted_pval: The value, below which, NES will be displayed on the heatmap. The default value is 0.25.f.show_only_NES_positive_score: If set to TRUE, only NES\u00a0>\u00a00 will be displayed on the heatmap. Default is FALSE.g.format.digits: The number of significant digits to be used for numeric display on the heatmap. Default value is 2.h.clustering_method: The clustering method for clustering the rows and/or columns. Default is \"complete\".i.clustering_distance_rows: The distance metric to use when clustering the rows. Default is \"euclidean\".j.clustering_distance_cols: The distance metric to use when clustering the columns. Default is \"euclidean\".k.show_text_for_ns: If set to TRUE (default), non-significant (ns) NES will be displayed on the heatmap.The following parameters are required:58.Visualize selected canonical marker genes using UMAP. 
One or more individual genes can be plotted on UMAP using the \u2018plot_umap_label_by_genes\u2019 function )# Myeloid progenitor> plot_umap_label_by_genes)# Erythroid progenitor> plot_umap_label_by_genes)# Megakaryocytic progenitor> plot_umap_label_by_genes)# B lymphoid progenitor> plot_umap_label_by_genes)# Dendritic precursor> plot_umap_label_by_genes)# Eosinophil/Basophil/Mast progenitor> plot_umap_label_by_genes)# Endothelial cells> plot_umap_label_by_genes)function A.# HSC/Ma.gene_list: A vector of one or more gene names to plot.The following parameter is required:The aim of cell type annotation is to assign a cell type identity to each cluster. The expression of selected marker genes can be visualized and explored across the different clusters, either using UMAP, dotplot, heatmap, or violin plots . Neverthlication .53.Load 59.Visualize selected canonical marker genes using bubble plot > plot_bubble_for_genes_per_clusterble plot B. One ora.cluster.type: The clustering method name used to identify and assign the cell clusters.b.gene_list: A vector of one or more gene names to plot.c.show.percent: If set to TRUE, the percentage of expressing cells for respective genes in each cluster are displayed on the dotplot. Default is FALSE.The following parameters are required:60.Visualize identified marker genes for each cluster using heatmap. One or more individual genes can be plotted using a heatmap with the \u2018plot_heatmap_for_marker_genes\u2019 function. Each gene is represented on each row of the output.> plot_heatmap_for_marker_genesa.cluster.type: The name of the clustering method used to identify and assign the cell clusters.b.n.TopGenes: The number of top genes for each cluster to plot. Default value is 5.The following parameters are required:61.Export top marker genes for each cluster. The top marker genes with statistical analysis results can be exported to the text file format.> export_marker_genes_to_tablea.cluster.type: The clustering method name used to identify and assign the cell clusters.b.n.TopGenes: The number of top genes for each cluster. Default value is 5.c.min.log2FC: The log2FC value, above which, genes will be included. Default value is 0.5.d.min.expFraction: The fraction of expressing cells, above which, genes will be included. Default value is 0.3.e.write.to.file: The file path to be exported.The following parameters are required:Timing: 2\u20133 h62.Load SingCellaR and required R packages. Here, the user can load the integrated R object saved from step 52.> library(SingCellaR)> library(AUCell)> library(ggplot2)> library(DAseq)> source('./utilis.R')63.Load the integrated R object generated from step 52.> load(file\u00a0= \"./Human_HSPC_All.SingCellaR.rdata\")64.Build AUCell gene rankings. The user will have to create the ranked gene list using the function \u2018AUCell_buildRankings\u2019 implemented in AUCell package.> set.seed(2021)> exprMatrix <- get_umi_count(Human_HSPC)> human_HSPCs_cells_rankings <- AUCell_buildRankingsa.exprMat: The raw expression count matrix. This can be retrieved from the SingCellaR object using the \u2018get_umi_count\u2019 function.b.nCores: The number of cores to use for parallel processing. The maximum number of cores is dependent on the user\u2019s device. Default value is 1.c.plotStats: If set to TRUE (default), the expression statistics will be summarized and plotted in the histogram and boxplots.Note: This step may be time-consuming. 
The user is advised to save the output of this step using the following code:> saveThe following parameters are required:65.Calculate AUCell scores. AUCell scores for each cell will be computed using the ranked genes from the previous step for the provided hematopoietic gene sets.> set.seed(2021)> human_HSPCs.AUCells.score <- Run_AUCellOptional: The user is advised to save the AUCell scores for further analysis.> save66.Visualize AUCell scores. The user can visualize AUCell scores for a given gene signature on specific clusters on the UMAP embedding. Here, we will use HSC/MPP gene signature on cluster 1 as an example example A. We obsa.AUCell_gene_set_name: The name of the gene signature to plot. The signature name specified here must be the same as the gene signature name in the GMT file provided in step 65.b.AUCell_score: The R object created from computing AUCell scores in step 65.c.selected.limited.clusters: Cells in selected cluster IDs will be displayed with the AUCell scores.d.IsLimitedAUCscoreByClusters: If set to TRUE, the AUCell scores will only be displayed for selected clusters as specified using the \u2018selected.limited.clusters\u2019 argument. Default is FALSE.e.AUCell_cutoff: The AUCell score threshold, above which, the scores will be displayed. The higher the score threshold, the more stringent the threshold. Default is 0.Note: AUCell cutoff score is arbitrary. To explore the AUCell cutoff score for each gene signature, the user can plot the score distribution using ggplot2 and manually explore the suitable threshold.The following parameters are required:67.Differential abundance testing. We use DAseq ,\u00a0\u00a0\u00a0\u00a0labels.2\u00a0= c,\u00a0\u00a0\u00a0\u00a0path\u00a0= \"./\",\u00a0\u00a0\u00a0\u00a0outputname\u00a0= \"eFL_vs_FL.pdf\")se DAseq to perfose DAseq B as an ea.groupA and groupB: The group names for integrated samples or an individual sample name.b.labels.1 and labels.2: The sample IDs of groupA and groupB, respectively.c.path: The folder path to save the output file.d.outputname: The output file name.The following parameters are required:The user can calculate the enrichment of specific gene sets assigned for an individual cell. We incorporated AUCell score analysis into SinTiming: 1\u20131.5 hFrom the annotation of cell types assigned in the cell type annotation steps, the user can further investigate the direction of cellular differentiation trajectories. The user can trace immature to more differentiated cell populations and understand the functional relationship between different cell populations in terms of cellular maturity. It is noteworthy that current trajectory analysis requires the user to identify the starting point (immature cell state), in this case, HSC/MPP. Hence, the cell annotations inferred from previous steps will be helpful to infer the primitive cell clusters. SingCellaR supports two approaches of trajectory analysis, namely force-directed graph (FDG) and diff68.Load SingCellaR package.> library(SingCellaR)69.Load the integrated R object generated from step 52.> load(file\u00a0= \"./Human_HSPC_All.SingCellaR.rdata\")70.Run force-directed graph analysis. The user can use the \u2018runFA2_ForceDirectedGraph\u2019 function to build the force-directed graph layout (embeddings). The layout can be annotated using various features, including cell lineage signature genes. 
Here, we use the Supervised harmony embeddings to generate force-directed graph layout.> runFA2_ForceDirectedGrapha.useIntegrativeEmbeddings: If set to TRUE, the data integration or batch correction embeddings will be used in conjunction with \u2018integrative_method\u2019 argument. Default is FALSE.b.integrative_method: The data integration or batch correction method name.c.knn.metric: The distance metric.d.n.dims.use: The number of PCs from \u2018integrative_method\u2019. If \u2018useIntegrativeEmbedding\u2019 is set to FALSE, the PCA analysis result is used. Default value is 30.e.n.neighbors: The number of neighboring cells.f.n.seed: The random number generator. Default value is 1.g.fa2_n_iter: The number of iterations for analyzing the \u2018networkx\u2019 graph. Default value is 1,000.The following parameters are required:71.> plot_forceDirectedGraph_label_by_clustersVisualize trajectories by Louvain clustering A.> plot_a.show_method: The clustering method name.The following parameter is required:72.> plot_forceDirectedGraph_label_by_multiple_gene_sets,\u00a0\u00a0custom_color\u00a0= c,\u00a0\u00a0isNormalizedByHouseKeeping\u00a0= F,\u00a0edge.size=0,\u00a0edge.color\u00a0= \"#FFFFFF\",\u00a0vertex.size\u00a0= 0.2,\u00a0showEdge\u00a0= F,\u00a0showLegend\u00a0= T)Visualize trajectories by using multiple lineages gene sets B.> plot_a.gmt.file: Path to the file containing the gene signatures in GMT format.b.show_gene_sets: The vector of gene signature names to show on the plot. The names must be the same names as found in the \u2018gmt.file\u2019.c.custom_color: The assigned colors for gene signatures in \u2018show_gene_set\u2019.d.isNormalizedByHouseKeeping: When set to TRUE (default), the gene expression values of each gene signature specified will be normalized by the housekeeping genes. The housekeeping genes are defined as the top 100 genes with the highest total gene expression values across all cells.e.edge.size: The size of the edges connecting the nodes. Default value is 0.2.f.edge.color: The color of the edges. Default color is gray.g.vertex.size: The size of the nodes. Default value is 1.5.h.showEdge: When set to TRUE (default), the edges will be displayed.i.showLegend: When set to TRUE (default), the legend will be displayed.The following parameters are required:73.Load SingCellaR and destiny R packages.> library(SingCellaR)> library(destiny)74.Load the integrated R object generated from step 52.> load(file\u00a0= \"./Human_HSPC_All.SingCellaR.rdata\")75.Run diffusion map analysis. The user can use the \u2018runDiffusionMap\u2019 function to generate the diffusion map layout (embeddings). The layout can be annotated using various features, including cell lineage signature genes. We will use the Supervised harmony embeddings to generate the diffusion map layout.> runDiffusionMapa.useIntegrativeEmbeddings: If set to TRUE, the data integration or batch correction embeddings will be used in conjunction with \u2018integrative_method\u2019 argument. Default is FALSE.b.integrative_method: The data integration or batch correction method name.c.n.dims.use: The number of PCs from \u2018integrative_method\u2019. If \u2018useIntegrativeEmbedding\u2019 is set to FALSE, the PCA result will be used. Default value is 30.d.n.seed: The random number generator. 
Default value is 1.The following parameters are required:76.> plot_diffusionmap_label_by_clustersVisualize trajectories by Louvain clustering C.> plot_a.show_method: The clustering method name.The following parameter is required:77.> plot_diffusionmap_label_by_multiple_gene_sets,\u00a0\u00a0custom_color\u00a0= c,\u00a0\u00a0isNormalizedByHouseKeeping\u00a0= F)Visualize trajectories by multiple lineages genes D.> plot_a.gmt.file: Path to the file containing the gene signatures in GMT format.b.show_gene_sets: The vector names of gene signatures to show in the plot. The names must be the same names as found in the gmt.file.c.custom_color: The assigned colors for gene signatures in \u2018show_gene_set\u2019.d.isNormalizedByHouseKeeping: When set to TRUE (default), the gene expression values of the individual genes of each gene signature specified will be normalized by the housekeeping genes. The housekeeping genes are defined as the top 100 genes with the highest total gene expression values across all cells.The following parameters are required:78.Load SingCellaR and required R packages.> library(SingCellaR)> library(monocle3)> library(ggplot2)> library(ComplexHeatmap)> library(circlize)> library(RColorBrewer)> source('./utilis.R')79.Load the integrated R object generated from step 52.> load(file\u00a0= \"./Human_HSPC_All.SingCellaR.rdata\")80.Prepare input files for Monocle3. The required objects include the expression matrix of raw counts, cell cluster metadata, and gene metadata.# Expression matrix> cells.used <- Human_HSPC@sc.clusters$Cell> umi <- get_umi_count(Human_HSPC)> used.umi <- umi> expression_matrix <- used.umi> dim(expression_matrix) # check the dimension of object# Cell cluster metadata> cell_metadata <- Human_HSPC@sc.clusters> rownames(cell_metadata) <- cell_metadata$Cell# Gene metadata> gene_annotation <- as.data.frame(rownames(used.umi))> colnames(gene_annotation) <- \"gene_short_name\"> rownames(gene_annotation) <- gene_annotation$gene_short_name81.Create Monocle3 object.> cds <- new_cell_data_set82.Integrate Monocle3 and SingCellaR results. Monocle3 normalizes the raw gene counts, and then performs PCA. The user can run the default workflow as suggested by Monocle3 tutorial. In this step, we will replace Monocle3\u2019s UMAP embeddings and add cluster information derived from the SingCellaR object.# Pre-process Monocle3 object> cds <- preprocess_cds> cds <- align_cds(cds)# Substitute Monocle3's embeddings with SingCellaR's embeddings> embeddings <-\u00a0>Human_HSPC@SupervisedHarmony.embeddings> cds@int_colData@listData$reducedDims$Aligned <- embeddings# Nonlinear dimension reduction> cds <- reduce_dimension# Identify and assign clusters> cds <- cluster_cells# Substitute Monocle3's UMAP embeddings with SingCellaR's embedding> newcds<- cds # change monocle3 objects name> SingCellaR.umap <-,c]> Human_HSPC@umap.result> rownames(umap) <- umap$Cell> umap$Cell <- NULL> newcds@int_colData$reducedDims$UMAP <- umap# Substitute Monocle3's cluster identity with SingCellaR's cluster identity> anno.clusters <- Human_HSPC@sc.clusters$louvain_cluster> names(anno.clusters) <- Human_HSPC@sc.clusters$Cell> newcds@clusters$UMAP$clusters <- anno.clusters83.Generate trajectory graph and order cells by pseudotime. To learn the cell differentiation trajectories, the user will use the \u2018learn_graph\u2019 function provided by Monocle3. By default, Monocle3 uses a 'self-defined' node to perform the pseudotime analysis. 
Thus, the user will need to define the root node, i.e., the most immature cluster. To identify the root node, the user can use the \u2018get_earliest_principal_node\u2019 function. Based on the previous analyses, the user can select \u2018cl1\u2019, the HSC/MPP cluster, as the starting point of the trajectory.> newcds <- learn_graph(newcds)# Apply function to retrieve root node> root.nodes <- get_earliest_principal_node# Order cells by pseudotime relative to root node> newcds <- order_cells# Save R object> save84.Visualize trajectory paths on UMAP 85.Visualize pseudotime on UMAP 86.Visualize pseudotime on SingCellaR FDG ,]> fa2\u00a0<- Human_HSPC@fa2_graph.layout> fa2.used <- fa2# Extract the pseudotime information> new_data <- data.frame)> new_data$Cell <- rownames(new_data)> new_data <- new_data# Integrate pseudotime with FDG embeddings> fa2.used <- fa2.used> colnames(fa2.used) <- c> fa2.dat <- cbind# Plot FDG> ggplot)\u00a0+\u00a0geom_point)\u00a0+\u00a0scale_color_viridis_c+\u00a0theme_classic\u00a0+\u00a0xlab(\"FDG1\")\u00a0+\u00a0ylab(\"FDG2\")87.Visualize the expression of selected genes along the paths. We plot erythroid lineage genes as the example.a.# Retrieve UMAP coordinates and annotate with cluster information> sc.clusters <-Human_HSPC@sc.clusters> umap.results <- Human_HSPC@umap.result> umap.results <- merge### Add developmental stage information> umap.results$stage[umap.results$sampleID %in% c]<- \"eFL\"> umap.results$stage[umap.results$sampleID %in% c(\"1_ABM_1\")]<- \"ABM\"> umap.results$stage[umap.results$sampleID %in% c]<- \"FL\"> umap.results$stage[umap.results$sampleID %in% c]<- \"FBM\"> umap.results$stage[umap.results$sampleID %in% c]<- \"PBM\"Add developmental stages information to the metadata.b.> Ery.path <- cDefine the path for the erythroid lineage based on the FDG, diffusion map, and Monocle3. 
We selected the path \u2018cl1-cl7-cl12-cl3\u2019 for the erythroid lineage.c.> umap.results.Ery <- umap.results> cells.eFL <- umap.results.Ery$Cell[umap.results.Ery$stage\u00a0== \"eFL\"]> cells.FL <- umap.results.Ery$Cell[umap.results.Ery$stage\u00a0== \"FL\"]> cells.FBM <- umap.results.Ery$Cell[umap.results.Ery$stage\u00a0== \"FBM\"]> cells.PBM <- umap.results.Ery$Cell[umap.results.Ery$stage\u00a0== \"PBM\"]> cells.ABM <- umap.results.Ery$Cell[umap.results.Ery$stage\u00a0== \"ABM\"]Extract cells from the erythroid trajectory for all stages.d.> genes.E <- c> matrix <- newcds@assays@data$counts> pt.matrix<-\u00a0matrix),order(pseudotime(newcds))]Extract genes known to be involved in the erythroid trajectory based on the pseudotime.e.> pt.matrix.eFL <- ExtractMatrix> pt.matrix.FL <- ExtractMatrix> pt.matrix.FBM <- ExtractMatrix> pt.matrix.PBM <- ExtractMatrix> pt.matrix.ABM <- ExtractMatrixExtract gene expression matrix for each group of cells.f.> ht1\u00a0<- plot_development_heatmap> ht2\u00a0<- plot_development_heatmap> ht3\u00a0<- plot_development_heatmap> ht4\u00a0<- plot_development_heatmap> ht5\u00a0<- plot_development_heatmap> ht.full <- ht1+ht2+ht3+ht4+ht5> ht.fullPlot gene expression heatmap along the path of the different developmental stages H.> ht1\u00a0 Ery.eFL <- ExtractCells(selected.cells\u00a0= cells.eFL)> Ery.FL <- ExtractCells(selected.cells\u00a0= cells.FL)> Ery.FBM <- ExtractCells(selected.cells\u00a0= cells.FBM)> Ery.PBM <- ExtractCells(selected.cells\u00a0= cells.PBM)> Ery.ABM <- ExtractCells(selected.cells\u00a0= cells.ABM)> matrix <- newcds@assays@data$counts> matrix.total <-Matrix::colSums(matrix)> norm.matrix <-(t(t(matrix)/matrix.total))\u221710000> expr.eFL <- norm.matrix> expr.eFL <- reshape2::melt(as.matrix(expr.eFL))> colnames(expr.eFL) <- c> expr.eFL$Stage <- \"eFL\"> expr.FL <- norm.matrix> expr.FL <- reshape2::melt(as.matrix(expr.FL))> colnames(expr.FL) <- c> expr.FL$Stage <- \"FL\"> expr.FBM <- norm.matrix> expr.FBM <- reshape2::melt(as.matrix(expr.FBM))> colnames(expr.FBM) <- c> expr.FBM$Stage <- \"FBM\"> expr.PaedBM <- norm.matrix> expr.PaedBM <- reshape2::melt(as.matrix(expr.PaedBM))> colnames(expr.PaedBM) <- c> expr.PaedBM$Stage <- \"PBM\"> expr.AdultBM <- norm.matrix> expr.AdultBM <- reshape2::melt(as.matrix(expr.AdultBM))> colnames(expr.AdultBM) <- c> expr.AdultBM$Stage <- \"ABM\"> expr.Ery <- rbindExtract gene expression from downsampled cells along the path from different developmental stages and pseudotime from \u2018newcds\u2019 object from step 83.h.> pseudotime <- as.data.frame(pseudotime(newcds))> colnames(pseudotime) <- \"pseudotime\"> pseudotime$Cell <- rownames(pseudotime)> pseudotime$pseudotime[pseudotime$pseudotime %in% \"Inf\"] <- 0> pseudotime <- pseudotimeExtract the pseudotime information from Monocle3 results.i.> expr.Ery <- mergeMerge gene expression data with pseudotime analysis results.j.> plot_genes> plot_genesVisualize selected erythroid gene expression along the path I.> plot_The step-by-step protocols describe an analysis pipeline used in a recent publication . Here, wSingCellaR requires signature gene sets to perform the cell type annotation analysis. Thus, the user would have to compile and curate customized gene sets for the relevant system of interest. In this protocol, we provide 75 gene sets curated from previous studies relevant to hematopoiesis. SingCellaR still lacks 'automatic object transformation' to interact with other existing packages, such as Seurat. 
However, SingCellaR uses the SingleCellExperiment object, the standard object for storing single-cell experimental data in R. Therefore, the gene expression matrix and cell metadata can be extracted simply from the SingleCellExperiment object. This limitation will be addressed when SingCellaR is updated to the next version to support more interactions with other packages and to ensure compatibility with the relevant R packages incorporated in SingCellaR.
Problem 1: The user may encounter the following error when executing the function ‘runFA2_ForceDirectedGraph’:
Error in runFA2_ForceDirectedGraph
There are two potential problems: the fa2 package is not installed, or the R environment path for the FA2 module in python is not found by R.
Potential solution:
FA2 installation. The fa2 package can be installed as suggested in the installation section.
Note: The fa2 package is not compatible with python 3.9 or higher versions. A Conda environment is recommended; it can be created using the ‘conda_create’ function after loading the reticulate package.
Python version configuration. The user can use the following code to configure the python path:
### Set the python version in the R environment.
use_python("/miniconda3/envs/r-reticulate/bin/python")
Note: The user must change the python path shown here to the Conda-specific path found on the user’s computer.
### Open ‘~/.Renviron’ in the terminal and add the following line:
RETICULATE_PATH="/miniconda3/envs/r-reticulate/bin/python"
### Restart the R session and then use the code below to check:
Sys.which("python")
"/miniconda3/envs/r-reticulate/bin/python" should be shown on the console.
The user can refer to the Python version configuration tutorial of the reticulate R package at this URL: https://rstudio.github.io/reticulate/articles/versions.html.
Problem 2: The user may find this error when performing the runScanorama(Human_HSPC) function:
"Error in runScanorama(Human_HSPC) : The scanorama python module is not installed!. Please install using pip"
Potential solution: The scanorama package can be installed as suggested in the SingCellaR installation section. The Python environment configuration can be found in Problem 1.
Problem 3: UMAP/FDG plots may show different rotations from this protocol. This is caused by differences in the software versions used for generating the plots and in the seed number setting.
Potential solution: This can be solved by setting a seed number (the n.seed parameter) in the runUMAP and runFA2_ForceDirectedGraph functions.
Problem 4: The user may encounter running time and memory issues when performing the AUCell_buildRankings function on a large-scale dataset.
Potential solution: The user can use the alternative function named ‘Build_AUCell_Rankings_Fast’ to speed up the running time and use less memory for ranking gene expression for each cell.
Problem 5: The kBET score is used to benchmark the integration results from different integrative methods. The user may encounter slightly different kBET scores from this protocol. This is due to different seed number settings and the number of subsampled cells for the kBET analysis. More running time and memory will be used if the user sets a high downsample size for the kBET analysis.
Potential solution: This issue can be solved by setting the seed number prior to running kBET using the ‘set.seed’ function.
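For example, a minimal sketch of a reproducible kBET run (the seed value 2021 is an arbitrary placeholder; the full argument list of ‘runKBET’ is described in step 47):
> set.seed(2021) # fix the random number generator before cells are downsampled
> kBET_result <- runKBET
With the same seed, the same n.sample value, and the same software versions, repeated runs should return matching kBET scores.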
The user can subsample and fix the number of cells for the kBET analysis using the n.sample parameter described in the \u2018runKBET\u2019 function.supat.thongjuea@imm.ox.ac.uk).Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Supat Thongjuea (This study did not generate new unique reagents."} +{"text": "Scientific Reportshttps://doi.org/10.1038/s41598-021-95102-7, published online 04 August 2021Correction to: The Code availability section in the original version of this Article was incomplete.\u201cCodes will be uploaded soon to [Insert URL].\u201dnow reads:https://github.com/kyuchoi/graph_neural_network_suicide_prediction\u201d\u201cThe codes are available at\u00a0The original Article has been corrected."} +{"text": "Scientific Reports 10.1038/s41598-021-02943-3, published online 08 December 2021Correction to: The original version of this Article contained an error in the link in the Code Availability section where,https://github.com/sblabbioinformatics/BG4_CUT_and_Tag_sc.\u201d\u201cScripts and general codes for G4-CUT&Tag data analysis are available at now reads:https://github.com/sblab-bioinformatics/BG4_CUT_and_Tag_sc.\u201d\u201cScripts and general codes for G4-CUT&Tag data analysis are available at The original Article has been corrected."} +{"text": "This article describes a method for creating applications for cluster computing systems using the parallel BSF-skeleton based on the original BSF (Bulk Synchronous Farm) model of parallel computations developed by the author earlier. This model uses the master/slave paradigm. The main advantage of the BSF model is that it allows to estimate the scalability of a parallel algorithm before its implementation. Another important feature of the BSF model is the representation of problem data in the form of lists that greatly simplifies the logic of building applications. The BSF-skeleton is designed for creating parallel programs in C++ using the MPI library. The scope of the BSF-skeleton is iterative numerical algorithms of high computational complexity. The BSF-skeleton has the following distinctive features.\u2022 The BSF-skeleton completely encapsulates all aspects that are associated with parallelizing a program.\u2022 The BSF-skeleton allows error-free compilation at all stages of application development.\u2022 The BSF-skeleton supports OpenMP programming model and workflows. Specifications Tablefarm skeleton based on the master/slave paradigm. The farm skeleton and the master/slave paradigm are discussed in a large number of papers model of parallel computations Map and Reducef to each element of list i denotes the iteration number; i-th approximation ; A is the list of elements of a certain set x is the current approximation) that maps the set B is a list of elements of the set A; A parallel skeleton is a programming construct, which abstracts a pattern of parallel computation and interaction i. Step 3 calculates the list B by applying the higher-order function Compute that calculates the next approximation s of the higher-order function Reduce. Step 6 increases the iteration counter i by one. Step 7 checks a termination criteria by invocating the Boolean user function StopCond, which takes two parameters: the new approximation StopCond returns true, the algorithm outputs Step 1 reads input data of the problem and an initial approximation. Step 2 assigns the zero value to the iteration counter A into K sublists of equal length . 
In the steps 3 and 4, the master process is idle. In Step 5, all worker processes send the partial foldings Reduce over the list of partial foldings Compute that calculates the next approximation; checks the termination criteria by using the Boolean user function StopCond and assigns its result to the Boolean variable exit. In the steps 6-9, the worker processes are idle. In Step 10, the master process sends the exit value to all worker processes. If the exit value is false, the master process and worker processes go to the next iteration, otherwise the master processes outputs the result and the computation stops. Note that, in the Steps 2 and 10, all processes perform the implicit global synchronization.The result is the parallel BSF\u201d prefix contain problem-independent code and are not subject to changes by the user; files with the \u201cProblem\u201d prefix are intended for filling in problem-dependent parts of the program by the user. Descriptions of all source code files are given in The BSF-skeleton is a compilable but not executable set of files. This set is divided into two groups: files with the \u201cinclude is shown in The dependency graph of the source code files by the directive #Problem-bsfParameters.h. They are used in the BSF-Code.cpp and should be set by the user. All these parameters are presented in The BSF-skeleton parameters are declared as macroses in the file Problem-bsfTypes.h. They are used in the BSF-Code.cpp and should be set by the user. All these types are presented in The predefined problem-depended BSF types are declared as data structures in the file reduceCounter. This extended reduce-list is presented by the pointer BD_extendedReduceList declared in the BSF-Data.h. When performing the Reduce function (see BC_ProcessExtendedReduceList in Section \u201cKey problem-independent functions (prefix BC_)\u201d), the elements that have this field equal to zero are ignored. For elements where reduceCounter is not zero, the values of the reduceCounter are added together. By default, the function BC_WorkerMap (see Section \u201cKey problem-independent functions (prefix BC_)\u201d) sets the reduceCounter to 1. The user can set the value of this field to 0 by setting the parameter *success of the function PC_bsf_MapF to 0.The BSF-skeleton appends to each element of the reduce-list the additional integer field called BSF-SkeletonVariables.h. The user can exploit these variables for the sake of debugging, tracing, and non-standard implementing . The user should not change the values of these variables. All skeleton variables are presented in The skeleton variables are declared in the file The skeleton functions are divided into two groups:BC_ that have implemented in the file BSF-Code.cpp; problem-dependent functions (predefined BSF functions) with the prefix PC_bsf_ that have declared in the file Problem-Code.cpp.1) problem-independent functions with the prefix BC_. The user also cannot change function headers with the prefix PC_bsf_ but must write an implementation of these functions. The body of a predefined BSF function cannot include calls of problem-independent functions with the prefix BC_. The hierarchy of the key function calls is presented in The user cannot change the headers and bodies of the functions with the prefix BSF-Code.cpp. Descriptions of some key problem-independent functions are presented in The implementations of all problem-independent functions can be found in the file PC_bsf_ declared in Problem-bsfCode.cpp. 
The user must implement all these functions. An instruction is presented in Section \u201cStep-by-step instruction\u201d. An example is presented in Section \u201cExample of using the BSF-skeleton\u201d.This section contains detailed descriptions of the predefined problem-dependent BSF functions with the prefix PC_bsf_CopyParameterPT_bsf_parameter_T (see Section \u201cError! Reference source not found.\u201d).Copies all order parameters from the in-structure to the out-structure. The order parameters are declared in the predefined problem-depended BSF type Syntaxvoid PC_bsf_CopyParameter;In parameters parameterInThe structure from which parameters are copied.Out parameters parameterOutPThe pointer to the structure to which parameters are copied.PC_bsf_InitProblem-Data.h.Initializes the problem-depended variables and data structures defined in Syntaxvoid PC_bsf_Init(bool* success);Out parameters*successfalse if the initialization failed. The default value is true.Must be set to PC_bsf_IterOutputOutputs intermediate results of the current iteration.Syntaxvoid PC_bsf_IterOutput;void PC_bsf_IterOutput_1;PC_bsf_IterOutput_2;void PC_bsf_IterOutput_3;In parameters reduceResultReduce function. reduceCounterPointer to the structure that contains the result of executing the reduceCounter (see Section \u201cExtended reduce-list\u201d).The number of summed .The functions PC_bsf_JobDispatcherThis function is used to organize the workflow (see Section \u201cWorkflow support\u201d) and is executed by the master process before starting each iteration. It implements a state machine that switches from one state to another. If you do not need the workflow support, then you should use the empty implementation of this function.Syntaxvoid PC_bsf_JobDispatcher;In|out parameters parameterPC_bsf_ProcessResults_1, PC_bsf_ProcessResults_2 and PC_bsf_ProcessResults_3.The pointer to the structure containing the parameters of the next iteration. This structure may be also modified by the functions Out parameters*jobThis variable must be assigned the number of the next action (job).*exittrue. The default value is false.If the stop condition holds, then this variable must be assigned RemarksBSF_sv_parameter is not allowed in the implementation of this function.Important: The use of the structure PC_bsf_JobDispatcher is invocated after the invocation of function PC_bsf_ProcessResults_1, PC_bsf_ProcessResults_2 or PC_bsf_ProcessResults_3.The function PC_bsf_MapFMap. To implement the PC_bsf_MapF function, we can use the problem-dependent variables and data structures defined in the file Problem-Data.h, and the structure BSF_sv_parameter of the type PT_bsf_parameter_T defined in Problem-bsfTypes.h.Implements the function that is applied to the map-list elements when performing the higher-order function Syntaxvoid PC_bsf_MapF;void PC_bsf_MapF_1;void PC_bsf_MapF_2;void PC_bsf_MapF_3;In parameters mapElemThe pointer to the structure that is the current element of the map-list.Out parameters reduceElemThe pointer to the structure that is the corresponding reduce-list element to be calculated.*successfalse if the corresponding reduce-list element must be ignored when the Reduce function will be executed. 
The default value is true.Must be set to RemarksPC_bsf_MapF_1, PC_bsf_MapF_2 and PC_bsf_MapF_3 are used to organize a workflow .The functions PC_bsf_ParametersOutputOutputs parameters of the problem before starting the iterative process.Syntaxvoid PC_bsf_ParametersOutput(PT_bsf_parameter_T parameter);In parameters parameterThe structure containing the parameters of the problem.PC_bsf_ProblemOutputOutputs the results of solving the problem.Syntaxvoid PC_bsf_ProblemOutput;void PC_bsf_ProblemOutput_1;void PC_bsf_ProblemOutput_2;void PC_bsf_ProblemOutput_3;In parameters reduceResultReduce. parameterThe pointer to the structure that is the result of executing the higher-order function The structure containing the parameters of the final iteration.RemarksThe functions PC_bsf_ProblemOutput_1, PC_bsf_ProblemOutput_2 and PC_bsf_ProblemOutput_3 are used to organize a workflow .PC_bsf_ProcessResultsProcesses the results of the current iteration: computes the order parameters for the next iteration and checks the stop condition.Syntaxvoid PC_bsf_ProcessResults;void PC_bsf_ProcessResults_1;void PC_bsf_ProcessResults_2;void PC_bsf_ProcessResults_3;In parameters reduceResultReduce. reduceCounterThe pointer to the structure that is the result of executing the higher-order function reduceCounter (see Section \u201cExtended reduce-list\u201d).The number of summed , then this variable must be assigned the number of the next action (job). Otherwise, this parameter is not used.*exittrue. The default value is false.If the stop condition holds, then this variable must be assigned RemarksBSF_sv_parameter is not allowed in the implementations of these functions.Important: The use of the structure PC_bsf_ProcessResults_1, PC_bsf_ProcessResults_2 and PC_bsf_ProcessResults_3 are used to organize a workflow .The functions PC_bsf_ReduceFImplements the operation Syntaxvoid PC_bsf_ReduceF;void PC_bsf_ReduceF_1;void PC_bsf_ReduceF_2;void PC_bsf_ReduceF_3;In parameters xThe pointer to the structure that presents the first term. yThe pointer to the structure that presents the second term.Out parameters zThe pointer to the structure that presents the result of the operation.RemarksPC_bsf_ReduceF_1, PC_bsf_ReduceF_2 and PC_bsf_ReduceF_3 are used to organize a workflow .The functions PC_bsf_SetInitParameterPT_bsf_parameter_T (see Section \u201cError! Reference source not found.\u201d).Sets initial order parameters for the workers in the first iteration. 
These order parameters are declared in the predefined problem-depended BSF type Syntaxvoid PC_bsf_SetInitParameter(PT_bsf_parameter_T* parameter);Out parameters parameterThe pointer to the structure that the initial parameters should be assigned to.PC_bsf_SetListSizeSets the length of the list.Syntaxvoid PC_bsf_SetListSize(int* listSize);Out parameters*listSizeMust be assigned a positive integer that specifies the length of the list.RemarksThe list size should be greater than or equal to the number of workers.PC_bsf_SetMapListElemi.Initializes the map-list element with the number Syntaxvoid PC_bsf_SetMapListElem;In parameterselemThe pointer to the map-list element.iThe ordinal number of the specified element.RemarksImportant: The numbering of elements in the list begins from zero.PC_bsfAssignAddressOffsetBSF_sv_addressOffset (see Section \u201cSkeleton variables\u201d).Assigns the number of the first element of the map-sublist to the skeleton variables Syntax void PC_bsfAssignAddressOffset;In parameters valueNon-negative integer value.RemarksImportant: The user should not use this function.PC_bsfAssignIterCounterBSF_sv_iterCounter (see Section \u201cSkeleton variables\u201d).Assigns the number of the first element of the map-sublist to the skeleton variables Syntax void PC_bsfAssignIterCounter;In parameters valueNon-negative integer value.RemarksImportant: The user should not use this function.PC_bsfAssignJobCaseBSF_sv_jobCase (see Section \u201cSkeleton variables\u201d).Assigns the number of the current activity (job) in workflow to the skeleton variables Syntax void PC_bsfAssignJobCase;In parameters valueNon-negative integer value.RemarksImportant: The user should not use this function.PC_bsfAssignMpiMasterBSF_sv_mpiMaster (see Section \u201cSkeleton variables\u201d).Assigns the rank of the master MPI process to the skeleton variables Syntax void PC_bsfAssignMpiMaster;In parameters valueNon-negative integer value.RemarksImportant: The user should not use this function.PC_bsfAssignMpiRankBSF_sv_mpiRank (see Section \u201cSkeleton variables\u201d).Assigns the rank of current MPI process to the skeleton variables Syntax void PC_bsfAssignMpiRank;In parameters valueNon-negative integer value.RemarksImportant: The user should not use this function.PC_bsfAssignNumberInSublistBSF_sv_numberInSublist (see Section \u201cSkeleton variables\u201d).Assigns the number of the current element in the map-sublist to the skeleton variables Syntax void PC_bsfAssignNumberInSublist;In parameters valueNon-negative integer value.RemarksImportant: The user should not use this function.PC_bsfAssignNumOfWorkersBSF_sv_numOfWorkers (see Section \u201cSkeleton variables\u201d).Assigns the total number of the worker processes to the skeleton variables Syntax void PC_bsfAssignNumOfWorkers;In parameters valueNon-negative integer value.RemarksImportant: The user should not use this function.PC_bsfAssignParameterBSF_sv_parameter (see Section \u201cSkeleton variables\u201d).Assigns the order parameters to the structure Syntax void PC_bsfAssignParameter(PT_bsf_parameter_T parameter);In parameters parameterThe structure from which the order parameters are taken.RemarksImportant: The user should not use this function.PC_bsfAssignSublistLengthBSF_sv_sublistLength (see Section \u201cSkeleton variables\u201d).Assigns the length of the current map-sublist to the skeleton variables Syntax void PC_bsfAssignSublistLength;In parameters valueNon-negative integer value.RemarksImportant: The user should not use this function.Step-by-step 
Step-by-step instructions
This section contains step-by-step instructions on how to use the BSF-skeleton to quickly create a parallel program. Starting from Step 2, we strongly recommend compiling the program after adding each language construction.
Step 1. First of all, we must represent our algorithm in the form of operations on lists using the higher-order functions Map and Reduce . For example:
typedef double PT_point_T[PP_N]; // Point in n-dimensional space
Step 4. In the file Problem-bsfTypes.h, implement the predefined BSF types. If we do not use a workflow, then we do not have to implement the types PT_bsf_reduceElem_T_1, PT_bsf_reduceElem_T_2 and PT_bsf_reduceElem_T_3, but we cannot delete these empty structures. For example:
struct PT_bsf_parameter_T {
PT_point_T approximation; // Current approximation
};
struct PT_bsf_mapElem_T {
int columnNo; // Column number in matrix Alpha
};
struct PT_bsf_reduceElem_T {
double column[PP_N]; // Column of intermediate matrix
};
struct PT_bsf_reduceElem_T_1 {};
struct PT_bsf_reduceElem_T_2 {};
struct PT_bsf_reduceElem_T_3 {};
Step 5. In the file Problem-Data.h, define the problem-dependent variables and data structures. For example:
static double PD_A[PP_N][PP_N]; // Coefficients of equations
Step 6. In the file Problem-bsfCode.cpp, implement the predefined problem-dependent BSF functions (see Section \u201cPredefined problem-dependent BSF functions (prefix PC_bsf_)\u201d) in the suggested order. To implement these functions, the user can write additional problem (user) functions in Problem-bsfCode.cpp. The prototypes of these problem functions must be included in Problem-Forwards.h.
Step 7. In the file Problem-bsfParameters.h, we can configure the BSF-skeleton parameters (see Section \u201cBSF-skeleton parameters\u201d).
Step 8. Build and run the solution in the MPI environment.
Jacobi method
In this section, we show how to use the BSF-skeleton to implement the iterative Jacobi method as an example. The Jacobi method solves a system of linear equations Ax = b with an n\u00d7n matrix A. It is assumed that all diagonal elements of A are nonzero. Let us define the vector d and the matrix C by d_i = b_i/a_ii, c_ij = \u2212a_ij/a_ii for i \u2260 j, and c_ii = 0. The iterative process is then as follows:
Step 1. k := 0; choose an initial approximation x^(0).
Step 2. x^(k+1) := Cx^(k) + d.
Step 3. If ||x^(k+1) \u2212 x^(k)|| < \u03b5, go to Step 5.
Step 4. k := k + 1; go to Step 2.
Step 5. Stop.
The diagonal dominance of the matrix A is a sufficient condition for the convergence of this process; in the Jacobi method, an arbitrary vector can be taken as the initial approximation x^(0).
Let us represent the Jacobi method in the form of an algorithm on lists. Let C_j denote the j-th column of the matrix C, j = 1, ..., n. For any vector x, the product Cx is obtained by multiplying each j-th column of the matrix C by the j-th coordinate of the vector x and summing the results. In the BSF-implementation of the Jacobi method, the matrix C is implicitly used to calculate the values of the map function (an illustrative sketch of the map and reduce functions is given after the list of links below).
The source code of the BSF-Jacobi algorithm, implemented by using the BSF-skeleton, is freely available on GitHub at https://github.com/leonid-sokolinsky/BSF-Jacobi. Additional examples of using the BSF-skeleton can be found on GitHub at the following links:
\u2022https://github.com/leonid-sokolinsky/BSF-LPP-Generator;
\u2022https://github.com/leonid-sokolinsky/BSF-LPP-Validator;
\u2022https://github.com/leonid-sokolinsky/BSF-gravity;
\u2022https://github.com/leonid-sokolinsky/BSF-Cimmino;
\u2022https://github.com/leonid-sokolinsky/NSLP-Quest.
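Continuing the sketch above (ours, not code from the BSF-Jacobi repository), the map and reduce functions for the Jacobi iteration could look roughly as follows. The exact prototypes of PC_bsf_MapF and PC_bsf_ReduceF must be taken from the Problem-bsfCode.cpp template of the skeleton; PD_C (the matrix C, assumed to be defined in Problem-Data.h alongside PD_A) and the use of the skeleton variable BSF_sv_parameter are our assumptions.
// Assumed sketch: each map-list element owns one column C_j of the matrix C.
// Map produces the partial column C_j * x_j; Reduce sums the partial columns,
// yielding C*x, to which the master adds d to obtain the next approximation.
void PC_bsf_MapF(PT_bsf_mapElem_T* mapElem, PT_bsf_reduceElem_T* reduceElem, int* success) {
    int j = mapElem->columnNo;
    double xj = BSF_sv_parameter.approximation[j]; // j-th coordinate of the current approximation
    for (int i = 0; i < PP_N; i++)
        reduceElem->column[i] = PD_C[i][j] * xj;
    *success = 1; // this element participates in the reduction
}

void PC_bsf_ReduceF(PT_bsf_reduceElem_T* x, PT_bsf_reduceElem_T* y, PT_bsf_reduceElem_T* z) {
    // Associative pairwise operation used by the higher-order function Reduce: z := x + y.
    for (int i = 0; i < PP_N; i++)
        z->column[i] = x->column[i] + y->column[i];
}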
The BSF-skeleton supports workflows. A workflow consists of orchestrated and repeatable activities (jobs). The BSF-skeleton supports up to four different jobs. The starting job is always numbered 0 (omitted in the source codes). The other jobs have the sequential numbers 1, ..., 3. Each job has its own type of reduce-list elements, defined in the file Problem-bsfTypes.h; all jobs have the same type of map-list elements. To organize the workflow, we need to follow these steps:
In the file Problem-bsfParameters.h, redefine the macro PP_BSF_MAX_JOB_CASE specifying the largest number of a job. For example, if the total job quantity is 3, the number to be assigned to PP_BSF_MAX_JOB_CASE must be 2.
In the file Problem-bsfTypes.h, define the types of reduce-list elements for all jobs whose sequential numbers are less than or equal to PP_BSF_MAX_JOB_CASE.
In the file Problem-bsfCode.cpp, implement the functions PC_bsf_MapF[_*], PC_bsf_ReduceF[_*], PC_bsf_ProcessResults[_*], PC_bsf_ProblemOutput[_*] and PC_bsf_IterOutput[_*] for all jobs whose sequential numbers are less than or equal to PP_BSF_MAX_JOB_CASE. The functions PC_bsf_ProblemOutput[_*] should assign the parameter *nextJob the sequential number of the next job (possibly the same).
If the number of workflow states is greater than the number of jobs, you can use the function PC_bsf_JobDispatcher to manage these states. An example of a solution using the BSF-skeleton with the workflow support is freely available on GitHub at https://github.com/leonid-sokolinsky/Apex-method.
The BSF-skeleton supports a parallelization of the map-list processing cycle in the worker processes (the function BC_WorkerMap) using the #pragma omp parallel for directive. This support is disabled by default. To enable this support, we must define the macro PP_BSF_OMP in the file Problem-bsfParameters.h. Using the macro PP_BSF_NUM_THREADS, we can specify the number of threads to use in the parallel for. By default, all available threads are used. Some numerical algorithms can be implemented naturally using the function .
Supplementary material: The source code of the BSF-skeleton is freely available on GitHub at https://github.com/leonid-sokolinsky/BSF-skeleton.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."} +{"text": "Present-day and ancient population genomic studies from different study organisms have rapidly become accessible to diverse research groups worldwide. Unfortunately, as datasets and analyses become more complex, researchers with less computational experience often miss their chance to analyze their own data. We introduce FrAnTK, a user-friendly toolkit for computation and visualization of allele frequency-based statistics in ancient and present-day genome variation datasets. We provide fast, memory-efficient tools that allow the user to go from sequencing data to complex exploratory analyses and visual representations with minimal data manipulation. Its simple usage and low computational requirements make FrAnTK ideal for users who are less familiar with computer programming and who are carrying out large-scale population studies. Our wrapper scripts make it possible to run this kind of analysis with a single command and support average pairwise distances, f3-, f4-, \u201cbasic\u201d D- and enhanced D-statistics. In addition, we include wrapper scripts for computing admixture/contamination-corrected f4-statistics over a range of admixture/contamination proportions and minor allele count-stratified D-statistics over a specified range of minor allele counts.To supplement the scripts for computing single statistics, we provide multi-threaded wrappers for automated computation of multiple related statistics . The wrapper scripts also provide automated plotting functionality, which allows users to create visual representations of classical exploratory analyses e.g., .
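For reference, the allele-frequency statistics named above are conventionally defined as follows. These are the standard definitions from the admixture-statistics literature rather than excerpts from the FrAnTK paper; p_i denotes the (derived) allele frequency in population i, and the sums run over the sites used:
f_3(X; A, B) = \mathbb{E}\left[(p_X - p_A)(p_X - p_B)\right]
f_4(A, B; C, D) = \mathbb{E}\left[(p_A - p_B)(p_C - p_D)\right]
D(H_1, H_2, H_3, H_4) = \frac{\sum (p_{H_1} - p_{H_2})(p_{H_3} - p_{H_4})}{\sum (p_{H_1} + p_{H_2} - 2 p_{H_1} p_{H_2})(p_{H_3} + p_{H_4} - 2 p_{H_3} p_{H_4})}
A significantly negative f_3(X; A, B) indicates that X is admixed between populations related to A and B, which is the logic behind the f_3 exploration of the Mal'ta genome shown below.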
For evepostmortem damage-related error , R and perl and depend only on standard tools and libraries that are commonly present in Unix-like setups in the field of population genomics: plink R librarf- and other site-statistics are available, e.g., popstats , which we distributed over 20 threads (Intel Xeon Gold 2.10\u2009GHz CPU). Using the qpDstat (v712) program from admixtools, the task was completed in 5.1\u2009h, with a peak memory usage of \u223c520\u2009GB (\u223c26\u2009GB per thread). Using the automated D-statistic wrapper in FrAnTK (autoDwfixed.R), the task was completed in 2.2\u2009h, with a peak memory usage of \u223c5.8\u2009GB (\u223c287\u2009MB per thread). We attribute these gains to two key features: (1) precomputing the allele frequencies to speed up subsequent computation and (2) processing one site at a time instead of loading the whole dataset onto memory. In this example, the admixtools-based approach would require the user to prepare a set of input files and distribute the parallel processes across different threads. Once all processes have run, the user would have to parse separate results and use a custom script for visualizing the results. By contrast, by using FrAnTK, the user can go from the initial data to a visual representation of the results by running two one-line commands.To assess the performance of our toolkit, we computed a number of statistics on a whole-genome reference dataset using FrAnTK and admixtools . We use the HGDP SNP array dataset to compute the 51 possible 3f-statistics of the form 3f, where X represents all the populations in the HGDP SNP array dataset. This run was completed in \u223c20\u2009s.Using the merged data, we can explore the broad genetic affinities of the low-depth Mal'ta genome. We use the frantk autof3wfixed \\ freqpref=HGDP_hg19_genotypes_f_WithBam h1\u2009=\u2009MalTa \\target=Yoruba catfile=HGDP_hg19_genotypes_cat \\legfile=HGDP_hg19_genotypes_leg nthr\u2009=\u200940autof3wfixed.R will output the plot shown in HGDP_hg19_genotypes_cat and HGDP_hg19_genotypes_leg files, which we supply through the catfile and legfile options. These results replicate the finding in (autoDwfixed.R wrapper (with 40 threads) to compute all possible D-statistics of the form D, where X represents all the populations in the HGDP dataset, including the Mal'ta individual. This run was completed in \u223c20\u2009s.frantk autoDwfixed \\ freqpref=HGDP_hg19_genotypes_f_WithBam \\ h1=Karitiana \\h2=Han h4=Yoruba \\ catfile=HGDP_hg19_genotypes_cat \\legfile=HGDP_hg19_genotypes_leg nthr=40autoDwfixed.R will output the plot shown in D deviated significantly from D\u2009=\u20090 (Z\u223c14.9). This pattern is most likely due to shared ancestry between the Mal'ta individual and present-day French . This procedure was completed in \u223c30\u2009s.We can use the t format . 
First, echo \"SanMbutiPygmyYorubaMandenkaPapuanMelanesianHanDaiFrenchItalianSardinianOrcadianMalTaKaritianaSurui\" > poi\u2003frantk Freqs2Treemix \\ freqpref=HGDP_hg19_genotypes_f_WithBam \\tmpref=hgdp_malta_tm popsofint=poiFinally, we run treemix following the parameters in #Compute the number of SNPs that should be included in each autosomal 5\u2009Mb-block.a=`zcat hgdp_malta_tm_ALL_tm.gz | wc -l`nsnps=`echo \"5000000/(2881033286/\"$a\")\u201d | bc`\u2003#Run treemix with 0 and 1 migrationstreemix -i hgdp_malta_tm_ALL_tm.gz -o tm_ALL_res_0mig -k \"$nsnps\" -noss \\-global -root San -m 0 -seed012345treemix -i hgdp_malta_tm_ALL_tm.gz -o tm_ALL_res_1mig -k \"$nsnps\" -noss \\-global -root San -m 1 -seed112345Treemix admixture graphs in FrAnTK is a toolkit that streamlines a set of common analyses that rely on allele frequency-based statistics, and makes them accessible to users that are less familiar with computer programming. We reduce memory and computing times by precomputing allele frequencies, thus allowing researchers to explore their own datasets with reduced computational resource requirements. Notably, the automated wrappers and plotting functionality in FrAnTK allow the user to carry out complex exploratory analyses and produce publication-ready visual representations with single-line commands and minimal data manipulation. Thus, we consider an appropriate protocol would comprise an initial exploration using the tools in FrAnTK, followed by the application of model-based strategies such as those implemented in qpWave and qpGraph (FrAnTK and its documentation are freely available in github.com/morenomayar/FrAnTK.G3 online.jkab357_Supplementary_DataClick here for additional data file."} +{"text": "Emerging evidence suggested that circular RNAs (circRNAs) play critical roles in cervical cancer (CC) progression. However, the roles and molecular mechanisms of hsa_circ_0007364 in the tumorigenesis of CC remain unclear. In the present study, we used bioinformatics analysis and a series of experimental analysis to characterize a novel circRNA, hsa_circ_0007364 was up-regulated and associated with advanced clinical features in CC patients. Hsa_circ_0007364 inhibition notably suppressed the proliferation and invasion abilities of CC cells in vitro and reduced tumor growth in vivo. Moreover, hsa_circ_0007364 was uncovered to sponge miR-101-5p. Additionally, methionine adenosyltransferase II alpha (MAT2A) was verified as a target gene of miR-101-5p, and its downregulation reversed the inhibitory effects of hsa_circ_0007364 knockdown on CC progression. Therefore, we suggested that hsa_circ_0007364 might serve as an oncogenic circRNA in CC progression by regulating the miR-101-5p/MAT2A axis, which provides a potential therapeutic target to the treatment.Research highlightshsa_circ_0007364 was upregulated in CChsa_circ_0007364 promoted CC cell progressionhsa_circ_0007364/miR-101-5p/MAT2A axis in CC In 2012 there were an estimated 530,000 CC cases and about 275,000 CC deaths . Thus, nin vitro. Hu et al. , h, hP <\u00a00.circRNAs ), and hscircRNAs ).Figure Assessment of hsa_circ_0007364 expression in CC revealed that hsa_circ_0007364 is remarkably upregulated in GSE102686 and GSE113696 datasets ,b). And 3.2.in vitro . CCK-8 in vitro ,g). More(in vivo ). Togeth3.3.Previous studies have shown that circRNAs can \u2018sponge\u2019 miRNAs in CC cells . 
To esta3.4.To elucidate the mechanism of miR-101-5p activity in CC cells, we predicted its targets using TargetScan, Starbase, miRTarBase and mircode and identified MAT2A as a possible target for miR-101-5p \u2013c). Dualin vitro . Similain vitro ,f). Take3.5.Next, we examined whether hsa_circ_0007364 functions as a molecular sponge against miR-101-5p to regulate MAT2A expression. RT-qPCR analysis revealed that hsa_circ_0007364 deficiency resulted in reduced MAT2A expression, while miR-101-5p suppression reversed this effect ,b). Resc4.Cervical cancer (CC) is one of the commonest malignancies globally . MountinMultiple studies show that circRNAs participate in tumorigenesis by sponging miRNAs ,29. PredMiRNAs modulate various cellular processes via their molecular targets . A searc5.In summary, we found that hsa_circ_0007364 is overexpressed in CC, and may enhance the proliferative and invasion capacity of CC cells by interacting with the miR-101-5p/MAT2A axis. Our findings highlight this axis as a potential therapeutic target against CC."} +{"text": "Germline Variants (GVs) are effective in predicting cancer risk and may be relevant in predicting patient outcomes. Here we provide a bioinformatic pipeline to identify GVs from the TCGA lower grade glioma cohort in Genomics Data Commons. We integrate paired whole exome sequences from normal and tumor samples and RNA sequences from tumor samples to determine a patient\u2019s GV status. We then identify the subset of GVs that are predictive of patient outcomes by Cox regression.For complete details on the use and execution of this protocol, please refer to \u2022Integration of whole-exome and RNA sequences to determine Germline Variants (GVs)\u2022Whole-exome and RNA sequences from tumors resolve low coverage issue in normal samples\u2022High correlation of GV allele frequencies between patient data and the GnomAD database\u2022GVs predict patient cancer outcome Germline Variants (GVs) are effective in predicting cancer risk and may be relevant in predicting patient outcomes. Here we provide a bioinformatic pipeline to identify GVs from the TCGA lower grade glioma cohort in Genomics Data Commons. We integrate paired whole exome sequences from normal and tumor samples and RNA sequences from tumor samples to determine a patient\u2019s GV status. We then identify the subset of GVs that are predictive of patient outcomes by Cox regression. Timing: 5\u00a0minThe scripts and test data sets required to run this protocol are available on the GitHub repositoryhttps://github.com/ds21uab/STAR_protocols_GV_calling.git.1.Open the Linux command-line interface. Copy and paste the command below to clone protocol directory from the GitHub repository:>git clonehttps://github.com/ds21uab/STAR_protocols_GV_calling.gitTiming: 1\u00a0min2.a.>pwdTo determine the exact location of the \u2018STAR_protocols_GV_calling\u2019 directory using command-line, type command pwd from the directory where \u201cSTAR_protocols_GV_calling\u201d is locatedThe output should look something like thisThe output of the pwd command shows that user is in the project directory, which is in the /home/dsahu/ directory.b.>nano \u223c/.bash_profileOpen and edit the bash profile using your favorite text editorc.>export protocol_dir=\"/home/dsahu/project\"Add the output of the pwd command to the last line.d.Press \u2018control\u2019\u00a0+ \u2018x\u2019 key to exit. You will be prompted to save the file. 
Press \u2018Y\u2019 for Yes to save the changes and \u2018enter\u2019.e.>source \u223c/.bash_profileActivate the changes in the bash_profileSave path for the \u2018STAR_protocols_GV_calling\u2019 directory in a local or online Linux cluster.Timing: 1\u00a0min3.Navigate to STAR_protocols_GV_calling directory.>cd STAR_protocols_GV_calling/4.Navigate to \u2018scripts\u2019 directory and set executable permission to all the scripts executable using the following command>cd scripts/>chmod\u00a0+x \u2217This protocol describes a computational approach that is based on several Linux-based software including gdc-client, SAMtools , BCFtoolTiming: 20\u00a0min5.https://gdc.cancer.gov/about-data/gdc-data-processing/gdc-reference-files/.a.>cd data/reference_data/HSapiens/hg38/>tar xvzf GRCh38.d1.vd1.fa.tar.gzUnzip and place the reference human genome FASTA sequence in the \u2018/data/reference_data/HSapiens/hg38/\u2019 directory.b.>module load samtools/1.12SAMtools should be loaded in the system PATH. We used the command below to load samtools in our online Linux cluster.c.>samtools faidx GRCh38.d1.vd1.faIndex the FASTA sequence.Navigate to \u2018data\u2019 directory with three subdirectories \u2018BAM\u2019, \u2018reference_data\u2019 and \u2018samples\u2019. The reference_data directory contains gene annotation BED file from the GENCODE database. Download GDC reference genome (hg38) FASTA sequence (GRCh38.d1.vd1.fa.tar.gz) from 6.a.Download BAM files using the following commandNavigate to \u2018BAM\u2019 directory and download BAM files for wxs-normal samples, wxs-tumor samples, and rnaseq-tumor samples from the provided manifest files and place them in their respective directory. A GDC token is required to download BAMs from the GDC database.>cd STAR_protocols_GV_calling/data/BAM/wxs-normal/>gdc-client download -m gdc_manifest_wxs-normal.txt -t token.txt>cd STAR_protocols_GV_calling/data/BAM/wxs-tumor/>gdc-client download -m gdc_manifest_wxs-tumor.txt -t token.txt>cd STAR_protocols_GV_calling/data/BAM/rnaseq-tumor/>gdc-client download -m gdc_manifest_rnaseq-tumor.txt -t token.txtThis protocol requires whole-exome-sequences from the normal samples , whole-exome-sequences from the tumor samples (wxs-tumor) and RNA-sequences from the tumor samples (rnaseq-tumor) from The Cancer Genome Atlas (TCGA) patients available in the Genomics Data Commons (GDC) data portal . The aliTiming: 1\u00a0min7.a.>cd data/BAM/wxs-normal/>find $(pwd) -type d | awk '!/logs/' | sed '1d'\u00a0>\u00a0input.listRun the following command from the directory that contains only folders of BAM files and do not contain other information except previous logs. For example, let us create an input.list for wxs-normal BAMs.b.>mv input.list wxs-normal_input.listRename the input.list as you wish.Variant calling using VarDictJava requires an input.list that contain path to folders of BAM files. 
Prepare an input list for wxs-normal, wxs-tumor and rnaseq-tumor data types, separately.8.The default output for this protocol is within the following directory.>cd STAR_protocols_GV_calling/analysis/\u2022data.table (v.1.14.0)\u2022dplyr (v1.0.6)\u2022forcats (v0.5.1)\u2022stringr (v1.4.0)\u2022purrr (v0.3.4)\u2022readr (v1.4.0)\u2022tidyr (v1.1.3)\u2022tibble (v3.1.2)\u2022ggplot2 (v3.3.3)\u2022Tidyverse (v.1.3.1)\u2022survival (v3.2-13) R software and required R packages: While newer versions of some of these packages are available, this protocol was developed with R v4.0.3, RStudio v4.1.1, and the following versions of R packages:\u2022Operating system: GNU/Linux\u2022Memory: 100 GB (memory requirement depends on size of the dataset)\u2022Processors: 1 required, 5 recommendedHardware Recommendations:The script in this protocol follows Simple Linux Utility for Resource Management (SLURM)-based schema and requires submission of the job to an online Linux cluster. The steps discussed below\u00a0loop over each BAM file or the downstream output files that were generated. Therefore, it is easier to submit similar jobs as job arrays, which will create a number of independent jobs (corresponding to the defined number of tasks) and execute them simultaneously. It is common to load pre-installed software as an environmental module available on the Linux cluster. User needs to load the relevant modules on their cluster. This protocol expects that R, VarDictJava, SAMtools and BCFtools are installed on your local or online Linux cluster and loaded in the system PATH.Note: Run parts 1-5 separately for wxs-normal samples, wxs-tumor samples, and rnaseq-tumor samples.A typical workflow to call germline variants from the wxs-normal samples, wxs-tumor samples, and rnaseq-tumor samples (the three \u2018data types\u2019 in this protocol) requires six parts. The first part is variant calling using VarDictJava. The second part is the preprocessing of VCFs and extraction of genotype status of each variant for each sample. We then perform union of all the unique variants from all three data types. The third part is to calculate the sequencing coverage of the union of unique variants. The fourth part is to merge genotype status file and sequencing coverage file of each sample to determine the status of each variant after correction for low sequence coverage. The variant status at positions with fewer than ten reads for a given sample is changed to unknown. The fifth part is to combine variant status file from each sample to create a large multi-sample VCF file. Finally, in the sixth part, at positions at which variant status is listed as unknown in the wxs-normal samples (because of low sequence coverage) we will insert variant calls made from the corresponding wxs-tumor samples. 
If the variant status is still unknown in a normal sample but is called in the corresponding rnaseq-tumor sample, then the rnaseq-derived variant is inserted to create the final combined wxs-rnaseq variant call set.Timing: 2\u00a0h 1.a.>nano VariantCallingFrom_VarDict.shNavigate to \u2018scripts\u2019 directory and open VariantCallingFrom_VarDict.sh script in your favorite text editor.b.>#SBATCH\u00a0-N 1 ##number of nodes>#SBATCH\u00a0--cpus-per-task=5 ##number of cpus per task>#SBATCH\u00a0-mem=100Gb ##memory requested per node in GB>#SBATCH\u00a0-t 02:00:00 ##time limit hrs:min:sec>#SBATCH\u00a0-p partition ##partition requested in the cluster>#SBATCH\u00a0-A account ## account to be charged>#SBATCH\u00a0-e slurm-%j.err ##standard error>#SBATCH\u00a0-output slurm-%j.out ##standard output>#SBATCH\u00a0-array=1-5 ##number of array jobs is from 1 to 5Note: Set the number of nodes, number of cpus, requested memory, and time in the job script according to the number of samples to be processed and resources available on the user online Linux cluster. The numbers designating the array jobs correspond to the numbers of the jobs in the input list.As the script in this protocol follows SLURM based schema. The structure of the job script is described below.c.>input_list=\"$protocol_dir/STAR_protocols_GV_calling/data/BAM/wxs\u00a0-normal/wxs-normal_input.list\"As different data types are processed, set the path in the script to the input list that corresponds to the data type See . For exad.>cd STAR_protocols_GV_calling/analysis/VCFs_from_VarDict/The default output for this step is stored in the following directory.e.>mkdir wxs-normal>mkdir wxs-tumor>mkdir rnaseq-tumorUser should create directory \u2018wxs-normal\u2019, \u2018wxs-tumor\u2019, and \u2018rnaseq-tumor\u2019 in the \u2018VCFs_from_VarDict\u2019 directory to store VCFs for respective data types.f.>out_VCF=\"$protocol_dir/STAR_protocols_GV_calling/analysis/VCFs_fr\u00a0om_VarDict/wxs-normal\"Set output path in the script accordingly See . For exag.>sbatch$protocol_dir/STAR_protocols_GV_calling/scripts/VariantCallingFrom_VarDict.shRun script using the following commandh.i.>#PBS -k o ##keep the job output>#PBS -N JobName ## name of the job>#PBS -l nodes=1 ##number of nodes>#PBS -l ncpus=5 ## number of cpus>#PBS -l mem=100Gb ## memory requested per node in Gb>#PBS -l walltime=02:00:00 ## time limit hrs:min:sec>#PBS -q queue ##partition requested in the cluster>#PBS -e torque-%j.err ##standard error>#PBS -o torque-%j.out ##standard output>#PBS -t 1-5 ##number of array jobs is from 1 to 5Change the structure of the job script in the VariantCallingFrom_VarDict.sh as described belowii.>qsub$protocol_dir/STAR_protocols_GV_calling/scripts/VariantCallingFrom_VarDict.shRun script using the following commandIf the user online Linux cluster supports Terascale Open-source Resource and QUEue Manager (TORQUE) system,Variant calling on wxs-normal BAMs, wxs-tumor BAMs, and rnaseq-tumor BAMs using VarDictJava. 
The settings were set as default except for requiring mapping quality greater than 30, base quality greater than 25, a minimum of 3 reads supporting a variant, minimum allele frequency of 5%, no structural variant calling, and the removal of duplicate reads.Timing: 30\u00a0min2.a.>cd STAR_protocols_GV_calling/analysis/PASS_variants/The default output for this step is stored in the following directory.b.>mkdir wxs-normal>mkdir wxs-tumor>mkdir rnaseq-tumorUser should create directory for each data type in the \u2018PASS_variants\u2019 directory to store outputs for respective data type.c.>nano extract_pass_variants.sh>out_VCF=\"$protocol_dir/STAR_protocols_GV_calling/analysis/PASS_va\u00a0riants/wxs-normal\"Set output path in the extract_pass_variants.sh script for each data type accordingly. For example, let\u2019s set the output path to store indexed passed variants for wxs-normal samplesd.>$protocol_dir/STAR_protocols_GV_calling/scripts/extract_pass_variants.shRun the script from the directory where VCFs from VarDictJava are located.Index VCF file and extract variants that received a PASS (shown in the \u201cfilter\u201d column) in the indexed VCF file.3.Remove header section from VCFs. Run the script from the directory where indexed pass variants are located.>$protocol_dir/STAR_protocols_GV_calling/scripts/remove_header_VCF.sh4.Extract unique variants from all the passed variants VCF files. Run script from the directory where VCF files without header are located. This script will output allVCF_variants.txt file , and unique_variants.txt file . User may rename the unique_variants.txt for each data type accordingly.>$protocol_dir/STAR_protocols_GV_calling/scripts/extract_uniqueVariants.shNote: This bash script extract_uniqueVariants.sh includes R script extract_uniqueVariants.R. See \u2018Potential 5.a.>cd STAR_protocols_GV_calling/analysis/genotype_status/The default output for this step is in the following directory.b.>mkdir wxs-normal>mkdir wxs-tumor>mkdir rnaseq-tumorUser should create separate directory in the \u2018genotype_status\u2019 directory to store outputs for each data types.c.>nano extract_genotype_from_passedVariants.sh>out_VCF=\"$protocol_dir/STAR_protocols_GV_calling/analysis/genotyp\u00a0e_status/wxs-normal\"Set output path in the extract_genotype_from_passedVariants.sh script for each data type. For example, set output path for wxs-normal samplesd.>$protocol_dir/STAR_protocols_GV_calling/scripts/extract_genotype_from_passedVariants.shRun script from the indexed passed variants VCFs directoryExtract genotype status for each variant from each sample. Here, we extract chromosome, position, reference allele, observed alternate allele, mutation status, total depth, and variant depth of each variant from each VCF file.6.a.>library (data.table)Load required R packageb.>wxs_normal=fread>wxs_tumor=fread>rnaseq_tumor=freadLoad data in Rc.>wxs_normal <- setDF >wxs_tumor <- setDF (wxs_tumor)>rnaseq_tumor <- setDF (rnaseq_tumor)Convert data.table to data.framed.>variant_list\u00a0= listConvert into liste.>union_variants= Reduce merge, by.y\u00a0= c, all.x=TRUE, all.y=TRUE),\u00a0\u00a0variant_list)Merge list based on chromosome, position, reference allele, and alternate allelef.>drop <- c>union_variants<-union_variantsDrop unwanted columnsg.>fwriteSave union variants in the analysis directoryh.>quitTerminate the current R session. User will be prompted to save the workspace. 
Please type \u2018no\u2019 if you wish not to save the workspace.i.>mv union_wxs_rnaseq_variants.txt union_wxs_rnaseq_variants.bedNow we are outside R. Convert the union_wxs_rnaseq_variants.txt file into .bed file in the Linux command lineMerge unique variants obtained from the wxs-normal, wxs-tumor and rnaseq-tumor data type by chromosome, position, reference allele and observed alternate allele. Here, we will keep the union of unique variants obtained from all the three data types and save them as union_wxs_rnaseq_variants.bed file. User can use R to merge the unique variants obtained from the three data typesTiming: 2 h7.a.>cd STAR_protocols_GV_calling/analysis/variant_coverageThe default output for this step is in the following directory.b.>mkdir wxs-normal>mkdir wxs-tumor>mkdir rnaseq-tumorUser should create separate directory for each data type in the \u2018variant_coverage\u2019 directory to store output for each data type.c.>nano UnionVariantCoverageFrom_Samtools_depth.sh>out_depth=\"$protocol_dir/STAR_protocols_GV_calling/analysis/varia\u00a0\u00a0\u00a0nt_coverage/wxs-normal\"Set output path in the script accordingly. For example, set output path for wxs-normal samplesd.>sbatch$protocol_dir/STAR_protocols_GV_calling/scripts/UnionVariantCoverageFrom_Samtools_depth.shCRITICAL: union_wxs_rnaseq_variants.bed file and input.list that contain the location of the BAM files are required as input files. Set path of input files in the script. Set array jobs with number of jobs in the input list.Run the script using the following commandIn this step, we will calculate the sequencing coverage of union variants (listed in the union_wxs_rnaseq_variants.bed file) on all BAMs requiring mapping quality greater than 30.Timing: 5\u00a0min8.a.Input files: prepare a tab separated input_genotype_samdepth.txt file as shown in b.>cd STAR_protocols_GV_calling/analysis/variant_statusThe default output for this step is in the following directory.c.>mkdir wxs-normal>mkdir wxs-tumor>mkdir rnaseq-tumorUser should create separate directory for each data type in the \u2018variant_status\u2019 directory to save output for each data type.d.>STAR_protocols_GV_calling/analysis/variant_status/wxs-normal/Set path in the out_filename column in the input_genotype_samdepth.txt file to save results for each data type. For example, set output path for wxs-normal samplese.Place the input_genotype_samdepth.txt file in the analysis directory.f.>sbatch$protocol_dir/STAR_protocols_GV_calling/scripts/determineVariantStatus.shNote: The bash script determineVariantStatus.sh includes R script determineVariantStatus.R. Set array jobs with number of jobs in the input list.Run the script using the following commands from the analysis folder.Determine status of each variant using the genotype status and sequencing coverage of each variant in each sample. The variant status at positions fewer than ten reads for a given patient is changed to unknown. 
If the sequencing coverage of the variant is more than ten reads and no alternate allele is reported for that variant, then the status of that given variant is changed to Homozygous reference.Timing: 1 h9.a.>cd STAR_protocols_GV_calling/analysis/combined_variant_statusThe default output for this step is in the following directory.b.>mkdir wxs-normal>mkdir wxs-tumor>mkdir rnaseq-tumorUser should create a separate directory in the \u2018combined_variant_status\u2019 to save output for each data type.c.>sbatch$protocol_dir/STAR_protocols_GV_calling/scripts/combineAllSamplesVariantStatus.shCRITICAL: The bash script combineAllSamplesVariantStatus.sh includes the R script combineAllSamplesVariantStatus.R. In the R script set full path for input files and the output path where combined variants from all samples to be saved. The memory and time may vary based on the number of samples to be processed.Run the script using the following commandsCombine variant status file of each sample to create a multi-sample VCF which stores the variant status from all the samples.10.>file=combinedVariantStatusFromAllSamples.txt>head -1 $file\u00a0>\u00a0header.txt>sed '1d' $file\u00a0>\u00a0combined_variants_without_header.txt>split -d -l 50000 combined_variants_without_header.txt -a 4 \u2013\u00a0additional-suffix=.txt segment_The script above outputs a large table which contains variant status from all the samples. However, we need to do a bit of legwork to get our data into reasonable shape. First, the combined variants status file contains millions of variants, and it will take too much of memory and time for data pre-processing. Therefore, we will first split the large-combined variant status file into chunks of small files keeping the same number of columns but split based on fixed number of rows. In this file, each line corresponds to one variant.a.-d #add numeric suffix to split fileb.-l #number of lines in each of the smaller filesc.-a #suffix lengthd.--additional-suffixe.segment_ #split file prefixWith the above commands we did the following:11.Prepare input list for pre-processing>find $(pwd) -type f -name \"segment\u2217\"\u00a0>\u00a0input.txt>sort -V input.txt\u00a0>\u00a0input.list12.a.Input files: input.list and header.txt fileb.Output files: processed combined variant status file for each split filec.>sbatch$protocol_dir/STAR_protocols_GV_calling/scripts/processCombinedVariants.shNote: The bash script processCombinedVariants.sh includes R script processCombinedVariants.R. Set array jobs with number of jobs in the input list.Run script from the folder where input.list and header.txt is storedThe pre-processing step takes each split file from the input.list and first removes the unwanted columns. Then for each variant it extracts the chromosome, base-position, the reference allele, the observed alternate allele and the variant status for each sample. From the fifth column onwards, the column names change to the respective sample names from TCGA. Modify the script if you have different sample names.13.Once preprocessing of each split file is finished, the next step is to merge them. Run the command below from the command line interface>awk 'NR\u00a0== 1 || FNR\u00a0>\u00a01' processed_CombinedVariants_\u2217.txt >processed_CombinedVariantsFromAllSamples.txt14.Remove variants which are either unknown or Homozygous reference across all samples from the processed_CombinedVariantsFromAllSamples.txt file. 
Run the script below from the directory where the processed_CombinedVariantsFromAllSamples.txt file is located. The script will output the processed_CombinedPotentialSNPs.txt file.
>Rscript $protocol_dir/STAR_protocols_GV_calling/scripts/extractPotentialSNPsFromCombinedVariants.R
Timing: 40\u00a0min
15.
a. Input files:
i. processed_CombinedPotentialSNPs.txt file obtained for the wxs-normal, wxs-tumor and rnaseq-tumor data types.
ii. Patient list with Case_ID and Sample_Barcode. If a matched tumor/normal sample pair has a whole exome sequencing and an associated RNA sequencing file, then the different datasets from the patient will have the same Case_ID. Prepare separate patient lists for the wxs-normal, wxs-tumor and rnaseq-tumor samples. Set the column names of each file as suggested:
\u00a0wxs-normal: Case_ID and normal
\u00a0wxs-tumor: Case_ID and tumor
\u00a0rnaseq-tumor: Case_ID and rnaseq
b. The patient list for this protocol is provided in the samples directory and can be accessed by the following command
>cd STAR_protocols_GV_calling/data/samples
c. Run the script from the analysis folder using the following command
>sbatch $protocol_dir/STAR_protocols_GV_calling/scripts/fillUnknownVariantsInNormalSamples.sh
CRITICAL: The bash script fillUnknownVariantsInNormalSamples.sh includes the R script fillUnknownVariantsInNormalSamples.R. Set the full path for the patient files and the processed_CombinedPotentialSNPs.txt file in the script.
d. The output from step six includes four files
i. combined_normal_unknown_filled.txt
ii. combined_normal_unknown_filled_arranged.txt
iii. final_merged_wxs_rnaseq_variants.txt
iv. final_variantsForAnnovar.txt
The goal of this step is to insert the variant status at positions listed as unknown in the wxs-normal sample from the corresponding tumor sample. If the variant status is still unknown in the normal sample, then we will insert the variant status of the rnaseq-derived variant from the corresponding rnaseq-tumor sample. This step will allow us to create a final combined wxs-rnaseq variant call set.
16. Use the Annovar software to perform functional annotation of germline variants, such as in which gene the variant is located and whether the variant is in the exonic, intronic, UTR, promoter, or splicing region of a gene or is in the intergenic region.
17. Use Annovar to determine the allele frequencies of germline variants in whole-genome sequencing data from various ethnic populations, such as those listed in the GnomAD database.
18. Calculate the allele frequency of each germline variant in this study using the following formula, treating each patient as two chromosomes and excluding patients whose variant status is unknown: allele frequency = (2 \u00d7 number of Homozygous_alt patients\u00a0+ number of Heterozygous patients) / (2 \u00d7 number of patients with a known genotype).
19. Calculate the correlation of the allele frequencies in the four variant call sets with each other and with the allele frequency from GnomAD using the GGally R package. In our study, the allele frequencies in the combined data set from \u223c500 patients gave a correlation coefficient of >0.9 with the allele frequencies in GnomAD.
20. Determine whether the germline variant calls separate patients based on self-reported race using the PLINK software.
21. Determine the genetic linkage between germline variants at different loci by performing Linkage Disequilibrium analysis.
22.
a. Load the required R package
>library(survival)
b.
i. Homozygous_ref: 0
ii. Heterozygous: 1
iii. Homozygous_alt: 2
The variant genotype is encoded as a character vector.
Model genotype in ordinal scalec.>labels <- variant_genotypeSave variant genotype for each patientd.>data <- read.csvDownload the survival data to the working directory see and loade.>Risk_survival= survfit\u223c labels,\u00a0\u00a0data=data)>pvalue_Risk_survival= survdiff\u223c\u00a0\u00a0\u00a0labels, data=data, rho=0)>p.val= 1 - pchisq - 1)Survival analysisf.>par, cex.axis=1, cex.lab=1, cex.main=1,font.axis=1, font.lab=1, par(font=1))>plot, col=c,mark.time=TRUE, xlab=\"Follow up in months\", ylab=\"Overallsurvival\", lwd=1)>text))Plot the Kaplan-Meier curves >par \u223c labels\u00a0+ Age\u00a0+\u00a0Percent_aneuploidy\u00a0+ Histology\u00a0+ Grade\u00a0+\u00a0IDH_status\u00a0+ Mutation_count\u00a0+ Chr7gainORChr10loss\u00a0+ MGMT_promoter_status\u00a0+ Chr1OR19q_codeletion\u00a0+\u00a0Treatment_site\u00a0+ PC3, data=data)>summary(variant.cox)Principal component 3 (PC3)Multivariate Cox regression analysis : ConvertDetermine for a given germline variant whether there is a significant difference in survival outcome in the minor allele compared to the reference major allele . Here, we have compared the overall survival outcome between LGG patients (n=507) with different rs1131397 (Chr1-154965759-G-C) genotypes. See 23.The output for multivariate Cox regression analysis includes coefficient, hazard ratio, z-score and p value for each variant. As multiple variants are tested, it is important to correct the p values after Cox regression. In our study, 196,022 variants were tested. False discovery was performed through Bonferroni Correction.24.Determine whether a given germline variant is predictive of increase or decrease in tumor mutation burden and responsiveness to immune checkpoint chemotherapy .25.For germline variants that are significantly associated with a phenotype , determine whether the variant is associated with differences in gene expression of nearby genes by performing expression quantitative trait loci (eQTLs) analysis. For more details on other downstream analyses that can be performed with germline variants, please refer to .In the previous steps, we have identified germline variants from the whole-exome sequences and rna-sequences in the TCGA-LGG cohort. Several downstream analyses can be performed to evaluate the quality and clinical relevance of the identified germline variants.This protocol presents instructions to integrate the whole-exome sequencing datasets from normal and tumor samples and RNA sequencing dataset from the tumor samples of the TCGA-LGG cohort to determine the germline variants.Germline variants from the TCGA datasets are typically called in wxs-normal blood samples. This approach captures exonic region of the genome, allowing one to find the variants within genes. However, sequence (and variants) outside the exonic region, such as the promoters, introns, intron-exon boundaries and UTRs are also captured as a byproduct of whole-exome capture and are present in wxs data . In thisThere were some choices we made in our protocol that may be changed by a user. We use a minimal sequencing coverage threshold of 10 before a variant status is called because that gives us enough confidence that if a variant was present at that locus, it should have been sampled by then (and sampled at least 3 times). This is an arbitrary threshold and other users can set other thresholds if they so desire, but one has to balance the need for finding sufficient patients with a minor allele with the need to be 100% confident that the variant call is correct. 
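A back-of-the-envelope calculation (ours, not from the protocol) makes this trade-off concrete. For a truly heterozygous site sequenced to depth d = 10, with each read drawn from either allele with probability 1/2,
\Pr[\text{0 alternate reads}] = (1/2)^{10} \approx 0.001, \qquad \Pr[\text{fewer than 3 alternate reads}] = \sum_{k=0}^{2} \binom{10}{k} (1/2)^{10} = \frac{56}{1024} \approx 0.055,
so a true heterozygote is very unlikely to be missed entirely at this depth, and raising the coverage threshold buys additional confidence at the cost of the number of patients with callable genotypes.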
In addition, we have a filter that a minimum of three reads should support the reported variant, and this too can be changed by the user. Note that we also removed reads that may have come from PCR duplicates, so that only one read with the identical start position is kept among all the duplicates. Finally, we turned off the calling of structural variants in this analysis so that no insertions, deletions, duplications, inversions, or large copy number variations were called.Our method has the benefit of determining the genotype status of a variant with low sequencing coverage in wxs-normal sample, by taking the genotype status of that variant from the corresponding wxs-tumor sample, where the variant at that position may be supported with higher sequencing coverage. However, if we are still unable to determine the genotype status, we insert the genotype status from the rnaseq counterparts. This method has not significantly affected the accuracy of the variant call in our study because the allele frequency calculated from the rnaseq tumor datasets were significantly positively correlated (r\u00a0>\u00a00.98) with the allele frequencies from the wxs-normal, wxs-tumor and combined-wxs-rnaseq variant call set. Importantly, allele frequencies in our study were significantly positively correlated with the allele frequency of germline variants listed in the population gnomAD database, supporting the reliability of our calls. Specifically, the allele frequency in GnomAD correlated (Pearson\u2019s correlation coefficient) with wxs-normal variant set (r\u00a0>\u00a00.963), wxs-tumor variant set (r\u00a0>\u00a00.964), rnaseq-tumor variant set (r\u00a0>\u00a00.937) and combined-wxs-rnaseq variant set (r\u00a0>\u00a00.947) .Once the genotype status for each variant in each patient is identified, the user can check the survival outcome of minor allele compared to major allele for each variant. The Kaplan-Meier survival analysis for the three genotype in variaThis protocol performs variant calling, pre-processing VCFs, calculating sequencing coverage, determining variant status, merging variant status from multi-samples, determining genotype of variant with low sequencing coverage from the respective wxs-normal, wxs-tumor and rnaseq-tumor BAM files. As these calculations are computationally intensive, we recommend running the protocol on a high-performance cluster. The memory and number of CPUs required to run each step See may varyhttps://gdc.cancer.gov/access-data/obtaining-access-controlled-data.A token is required for downloading controlled test dataset See from theDownloading GDC datasets and germline variant calling consumes considerable amount of time and computational resources. We recommend processing a small number of wxs and rnaseq BAM files to guide the computational resources requirement for the future analyses. This protocol requires user to modify the time needed in the job script for each step accordingly.This protocol follows SLURM based schema and requires knowledge of the bash and R scripting language. Newer users may need to learn some basic Linux commands such as ls, echo, cd, rm, mv, mkdir, find, nano, awk, sed, grep, pwd, bash, and sbatch used in this protocol to execute it fully.A common warning may be displayed for the bam index: BAM index file is older than BAM file (corresponding protocol step: >samtools index \u2217.bamIt is a warning message for bam index file. It can be ignored if you are sure that the index file is up to date. 
You can re-create the index file of the BAM file with the samtools index command shown above, and the warning message should go away.
A common error may be displayed: These module(s) or extension(s) exist but cannot be loaded as requested . Another common error from R is: \u2018names\u2019 attribute must be the same length as the vector .
Further information and requests for resources should be directed to and will be fulfilled by the technical and lead contacts, Divya Sahu and Anindya Dutta (duttaa@uab.edu).
This study did not generate new unique reagents."} +{"text": "This study aimed to assess the predictive ability of 18F-FDG PET/CT radiomic features for MYCN, 1p and 11q abnormalities in NB. One hundred and twenty-two pediatric patients with NB were retrospectively enrolled. Significant features by multivariable logistic regression were retained to establish a clinical model (C_model), which included clinical characteristics. 18F-FDG PET/CT radiomic features were extracted by the Computational Environment for Radiological Research. The least absolute shrinkage and selection operator (LASSO) regression was used to select radiomic features and build models (R_model). The predictive performance of the models constructed by clinical characteristics (C_model), the radiomic signature (R_model), and their combination (CR_model) was compared using receiver operating characteristic (ROC) curves. Nomograms based on the radiomic score (rad-score) and clinical parameters were developed. The patients were classified into a training set (n = 86) and a test set (n = 36). Accordingly, 6, 8, and 7 radiomic features were selected to establish R_models for predicting MYCN, 1p and 11q status. The R_models showed a strong power for identifying these aberrations, with areas under the ROC curve (AUCs) of 0.96, 0.89, and 0.89 in the training set and 0.92, 0.85, and 0.84 in the test set. When combining clinical characteristics and the radiomic signature, the AUCs increased to 0.98, 0.91, and 0.93 in the training set and 0.96, 0.88, and 0.89 in the test set. The CR_models had the greatest performance for MYCN, 1p and 11q predictions (P < 0.05). The pre-therapy 18F-FDG PET/CT radiomics is able to predict MYCN amplification and 1p and 11q aberrations in pediatric NB, thus aiding tumor staging, risk stratification and disease management in the clinical practice. Neuroblastoma (NB), the most common extracranial solid pediatric tumor, accounts for about 8\u201310% of all childhood cancer and 12\u201315% of childhood cancer mortality . 123I-Metaiodobenzylguanidine (123I-MIBG) scintigraphy is a standard practice in the diagnosis and follow-up of NB patients. 18F-FDG PET imaging has been reported to be equal or superior to 123I-MIBG scan for delineating NB disease extent in the chest, abdomen, and pelvis . The inclusion criteria were as follows: (1) pathologically confirmed NB; (2) age \u2264 18 years at diagnosis; (3) complete PET/CT imaging data; (4) complete clinical information; (5) no cancer therapy before PET/CT imaging; (6) complete MYCN amplification and 1p and 11q aberrations data. Subsequently, 17 cases were excluded because of unavailable MYCN, 1p and 11q information, and 122 patients were included in this study. These patients were randomly divided into training set and test set with a ratio of 7:3.
This retrospective study was approved by Institutional Review Board of our hospital and the requirement of written informed consent was waived.MYCN amplification and 1p and 11q aberrations were determined using FISH from paraffin-embedded tissue obtained by biopsy or surgery at initial diagnosis according to the previously published method . AccordiPatient gender, age, neuron-specific enolase (NSE), serum ferritin (SF), lactate dehydrogenase (LDH), vanillylmandelic acid (VMA), homovanillic acid (HVA), maximum tumor diameter (MTD) in Ultrasound, and MTD in CT and/or MRI.All patients underwent whole body scan on the PET/CT scanner in accordance with EANM guidelines , 20 and Univariate analysis was performed to compare the differences in clinical characteristics. Based on the selected characteristics, a clinical model (C-model) was established.n = 18), shape features (n = 14), gray level co-occurrence matrix (GLCM) features (n = 24), gray level run length matrix (GLRLM) features (n = 16), gray level size zone matrix (GLSZM) features (n = 16), neighboring gray tone difference matrix (NGTDM) features (n = 5), and gray level dependence matrix (GLDM) features (n = 14) were extracted from the original and the pre-processed images. The following methods were used in the imaging processing: wavelet filtering, square, square root, logarithm, exponential and gradient filtering were obtained to assess the reliability of variables using the features extracted from the two sets of ROIs portrayed separately by two different nuclear medicine physicians in 24 out of the 122 patients with NB after 2 months. Because of imbalanced datasets, synthetic minority oversampling technique (SMOTE) was used to improve random oversampling in the training set. Least absolute shrinkage and selection operator (LASSO) was applied for variable selection and regularization in the training set. Predictive R_models were built by logistic regression and the radiomic score (rad-score) for each patient was computed based on the selected radiomic features. Additionally, the selected clinical characteristics combined with radiomics features were used to construct the combination model (CR_model). All models were built and trained in the training set, and the prediction performance was evaluated in the training and test sets. Ten-fold cross-validation was applied to prevent model overfitting in the training process. Receiver operating characteristic (ROC) curve and area under curve (AUC) were employed for the evaluation of the diagnostic performance in the training and test sets.www.python.org) and R . The Python packages of \u201csklearn,\u201d \u201cnumpy,\u201d and \u201cpandas\u201d were used for LASSO binary logistic regression and ROC curve; the \u201cscipy\u201d was for analyzing statistical properties; the \u201cimblearn\u201d was for SMOTE. The R package \u201crms\u201d was employed to create nomograms. The t-test or Mann-Whitney U-test was applied for univariate analysis, and p < 0.05 with a 95% confidence interval was considered as statistical significance. AUC-ROC curve was calculated for evaluating the diagnostic performance of models. AUC ranging from 0.5 to 1.0 is commonly used as a measure of classifier performance. A value of 0.5 is equal to random guessing, while 1.0 means a perfect classifier.Statistical analyses were performed with Python . Between 1p-positive and negative cases, NSE, LDH, VMA, MTD in Ultrasound and MTD in CT/MRI were distinct (All p < 0.05). 
Between 11q-positive and negative cases, age, SF, LDH, VMA, and HVA were distinct (All p < 0.05) .The total of 2,632 radiomic features were extracted from PET/CT images using pyradiomics. After assessing the robustness, 1,623 out of 2,632 features retained for model building, with intraclass correlation coefficients (ICC) > 0.75. In respect of C-model constructed by logistic regression and trained in the training set, 4 clinical characteristics were selected for MYCN prediction, with 3 characteristics for 1p prediction and 3 characteristics for 11q prediction. As for R_model (radiomics signature) establishment, 6 radiomic features were chosen for MYCN prediction, with 8 features for 1p prediction and 7 features for 11q prediction .In regard to CR_model construction, eight features were chosen for MYCN prediction, which included 4 clinical characteristics and 2 PET, 2 CT features , 3. ElevRad-scores were calculated by the following formula:Rad_score_MYCN = \u22122.6446+ 0.17750 \u00d7 PET_wavelet-LLH_glszm_GrayLevelNonUniformity+ 0.88251 \u00d7 PET_wavelet-HHH_glszm_SizeZoneNonUniformity\u2013 0.00069 \u00d7 CT_exponential_glrlm_LongRunEmphasis\u2013 0.02217 \u00d7 CT_wavelet-HHL_firstorder_MaximumRad_score_1p = 2.9612\u2013 115.24 \u00d7 PET_squareroot_ngtdm_Contrast\u2013 0.29673 \u00d7 PET_logarithm_firstorder_Minimum+ 0.04218 \u00d7 PET_wavelet-LLH_glrlm_LongRunLowGrayLevelEmphasis+ 2.1217 \u00d7 PET_wavelet-HHH_glszm_SmallAreaHighGrayLevelEmphasis\u2013 5.5262 \u00d7 PET_wavelet-HHH_glszm_LowGrayLevelZoneEmphasis\u2013 5.1213 \u00d7 CT_exponential_glszm_SmallAreaEmphasisRad_score_11q = \u22122217.3\u2013 147.63 \u00d7 PET_wavelet-LHL_gldm_DependenceNonUniformityNormalized\u2013 0.41560 \u00d7 CT_wavelet-LLL_glrlm_RunVariance\u2013 0.59915 \u00d7 CT_wavelet-LHL_firstorder_Median+ 58.736 \u00d7 CT_wavelet-LHL_glcm_Imc1\u2013 14.536 \u00d7 CT_wavelet-HLL_glrlm_LowGrayLevelRunEmphasis+ 2232.9 \u00d7 CT_wavelet-HHH_firstorder_Entropy.p-values of radiomic features are shown in p < 0.001). NB with MYCN, 1p and 11q positive had higher Rad-score than those with negative in both the training and test sets.The Nomogram score (Nomo_score) was calculated by the following formula :Nomo_score_MYCN = \u22120.7569 + 0.0064 \u00d7 LDH + 2.4857 \u00d7 Rad_score_MYCNNomo_score_1p = \u22120.5175 + 0.0017 \u00d7 LDH + 1.0476 \u00d7 Rad_score_1pNomo_score_11q = \u22120.3897 \u2013 0.0020 \u00d7 LDH + 0.0088 \u00d7 SF + 1.6657 \u00d7 Rad_score_11qThe nomogram was created based on the training set, which represented individualized prediction and visualized proportion of each factor .To evaluate the performance in predicting MYCN, 1p and 11q status, C_model, R_model and CR_model were compared. The predictive abilities of models were shown in 18F-FDG PET/CT-based radiomics had an extremely important role in predicting MYCN amplification and 1p and 11q aberrations. In particular, CR_model was suggested to be the best model for the prediction of MYCN, 1p and 11q status with the largest AUCs in the training and test sets.Considering the well-established role of MYCN, 1p and 11q abnormalities in the prognosis of NB, identifying these events are crucial for risk stratification. 
This study provided three distinct forms of predictive models for identifying MYCN and chromosomal abnormalities in a non-invasive way, demonstrating that pre-therapy Recently, clinical variables (such as LDH and SF) have been demonstrated to be prognostic biomarkers in large-scale studies, which suggested to reconsider utilizing LDH and SF as NB risk stratification factors , 23. In In this study, radiomic features were selected to construct CR_model for predicting MYCN, 1p and 11q abnormalities, including: PET_wavelet-LLH_glszm_GrayLevelNonUniformity, PET_wavelet-HHH_glszm_SizeZoneNonUniformity, CT_exponential_glrlm_LongRunEmphasis, CT_wavelet-HHL_firstorder_Maximum, PET_squareroot_ngtdm_Contrast, PET_logarithm_firstorder_Minimum, PET_wavelet-LLH_glrlm_LongRunLowGrayLevelEmphasis, PET_wavelet-HHH_glszm_SmallAreaHighGrayLevelEmphasis, PET_wavelet-HHH_glszm_LowGrayLevelZoneEmphasis, CT_exponential_glszm_SmallAreaEmphasis, PET_wavelet-LHL_gldm_DependenceNonUniformityNormalized, CT_wavelet-LLL_glrlm_RunVariance, CT_wavelet-LHL_firstorder_Median, CT_wavelet-LHL_glcm_Imc1, CT_wavelet-HLL_glrlm_LowGrayLevelRunEmphasis, and CT_wavelet-HHH_firstorder_Entropy. The majority of these features (12/16) were not derived from the primary image but from wavelet decomposition images, possibly because wavelet transformed features contained high-order information that may be more helpful for MYCN, 1p and 11q prediction. Previous studies have revealed the potential value of wavelet features in histologic subtype prediction and prognostic assessment , 26. In 123I-MIBG scan is the most frequently used imaging modality and is regarded as standard of care in patients with NB. In comparison with 18F-FDG PET/CT, 123I-MIBG scan is carried out over 2 days and the image quality is less ideal that could post a challenge to inexperienced physicians the status of MYCN, 1p and 11q can be used for risk stratification, therapy selection, therapy response monitor and prognosis prediction.The potential clinical significance of the present study included: (1) radiomics based on pre-therapy This study had limitations. Small size cohort from single center may influence the generalized ability, sensitivity and specify of the predictive models. Therefore, prospective larger cohort from multi-center is necessary to validate the results and improve the reliability of models for MYCN, 1p and 11q predictions in NB.The models developed by the pre-therapy 18F-FDG PET/CT radiomic signature and clinical parameters are able to predict MYCN amplification and 1p and 11 aberrations in pediatric NB, thus risk stratification, disease management and guiding personalized malignancy therapy in the clinical practice.The original contributions presented in the study are included in the article/The studies involving human participants were reviewed and approved by Beijing Friendship Hospital, Capital Medical University. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.LQ, SY, and SZ made substantial contributions to study design, image acquisition, data analysis and interpretation, and new software creation in this work. SZ, HQ, WW, YK, LL, JL, and HZ contributed writing and/or revising the manuscript. JY and JL approved all versions to be published and were responsible for all aspects of this study. 
All authors contributed to the article and approved the submitted version.This study was supported by Capital Health Development Research Project (No. 2020-2-2025), National Natural Science Foundation of China , and National Key Research and Development Plan (No. 2020YFC0122000).LL was employed by the company Sinounion Medical Technology (Beijing) Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The Carbon Ore Resources Database (CORD) is a working collection of 399 data files associated with carbon ore resources in the United States. The collection includes spatial/non-spatial, filtered, processed, and secondary data files with original data acquisition efforts focused on domestic coal resources. All data were acquired via open-source, online sources from a combination of 18 national, state, and university entities. Datasets are categorized to represent aspects of carbon ore resources, to include: Geochemistry, Geology, Infrastructure, and Samples. Geospatial datasets are summarized and analyzed by record and dataset density or the number of records or datasets per 400 square kilometer grid cells. Additionally, the \u201cCORD Platform,\u201d an ArcGIS Online geospatial dashboard web application, enables users to interact and query with CORD datasets. The CORD provides a single database and location for data-driven analytical needs associated with the utilization of carbon ore resources. Specifications TableValue of the Data\u2022The Carbon Ore Resources Database (CORD) enables broader understanding and data-driven analyses of in-situ-, supply chain-, and consumer- based carbon resources, by providing a single location to efficiently access carbon ore resource datasets for a range of applications and end users. The systematized database organizes carbon ore data so it can easily be retrieved and analyzed.\u2022Increased accessibility to systematized carbon ore resource datasets benefits research and development scientists, analysts, developers, economists, and engineers from various organizations. 
These entities include coal mining companies; power plant operators; government agencies; non-governmental organizations (NGOs); and natural resource managers.\u2022Access to integrated, comprehensive carbon ore resource data are necessary for a range of applications, including optimizing coal production and deliveries to existing and new markets; mitigating the impacts of coal ash disposal, acid mine drainage, and greenhouse gas emissions; increase beneficial use of coal and coal by-products; and extraction of specific coal sources for carbon-based products and rare earth elements.\u2022Broader applications include decision support for carbon management and policy, identifying opportunities for the development of coal and carbon management technologies.\u2022Geospatial datasets within the CORD facilitate mapping and analysis using GIS (Geographic Information Systems) software.1https://edx.netl.doe.gov/dataset/cord) as two separate zipped folders, one in a geodatabase format and the other in a folder file structure.The Carbon Ore Resources Database (CORD) is a collection of 399 individual data files associated with carbon ore resources. The original data acquisition efforts focused on coal resources in the United States. Supplementary File 1 provides descriptions for each individual data file organized by category. Supplementary File 2 lists each file by name, category, data type (secondary or processed), coal filter field , data format type , spatial (raster), or table), available formats, source organization, link to the data download source, and publication citation (if available). The CORD can be downloaded from the NETL's Energy Data eXchange website The \u201cInfrastructure network\u201d category consists of nine processed data files and 90,634 records associated with coal resource infrastructure . CurrentThe \u201cSamples integrated\u201d category consists of two processed data files and 64,776 records associated with coal samples . This inThe \u201cSamples original\u201d category consists of 17 processed and secondary data files and 36,216 records associated with coal samples . 
This in\u2022Samples_All (tab \u201cSamples_All\u201d)\u2022Coal_Delivery_Pathways_2011_2016 \u2022Coal_Mine_Deliveries_2011_2016 \u2022Coal_Mine_Production_2011_2016 \u2022Coal_Source_Regions_Production_Deliveries_2011_2016 \u2022Power_Plant_ByProductsType_2011_2016 (tab \u201cPowerPlant_ByProdType_2011_2016\u201d)\u2022Power_Plant_Consumption_2011_2016 (tab \u201cPowerPlant_Cons_2011_2016\u201d)\u2022Power_Plant_Deliveries_and_ByProducts_2011_2016 (tab \u201cPowerPlant_Del_ByProd_2011_2016\u201d)\u2022Power_Plant_Deliveries_by_Coal_Source_Region_2011_2016 (tab \u201cPowerPlant_Del_by_CSR_2011_2016\u201d)\u2022Power_Plant_Stockpiles_2011_2016 (tab \u201cPowerPlant_Stock_2011_2016\u201d)A data dictionary (field names and descriptions) is provided in an Excel workbook for datasets that required additional processing and integration steps (Supplementary File 3), where each tab refers the following 10 datasets:\u2022CORD_Data_Script1.py - This script takes an input folder path with CSV files and exports the files into a new folder with updated and modified attribute names.\u2022CORD_Data_Script2.py - This script takes an input folder path with CSV files and exports the combined files into a new folder with an updated schema and all empty rows removed.\u2022CORD_Field_map.csv - This CSV file provides the Mapping of the input data fields to the output data fields for the data conversion script CORD_Data_Script1.py.\u2022CORD_Schema_combined.csv - This CSV file contains the combined schema for the data that is converted from multiple input CSV files to a single output CSV. It is used with the CORD_Data_Script2.py python script.Additionally, two python scripts and two CSV files associated with field mapping of the \u201cSamples_All\u201d table are provided in Supplementary File 4:2All data processing was performed using ESRI's ArcGIS ArcMap 10.7 software Secondary data files that did not require any processing were directly imported into the database. If data required filtering for explicit coal records, the field name used to filter the records was recorded Supplementary File 2, within the \u201cCoal_filter_field\u201d. Files labelled as \u201cprocessed\u201d in Supplementary File 2, required additional modification before including into the database. These data files include those in the \u201cInfrastructure network\u201d, \u201cSamples original\u201d, and \u201cSamples integrated\u201d categories. Each processed data file involves a unique method before integrating into the CORD. These methods are described by category and for each processed data file as necessary:Infrastructure network:Coal_Source_Regions_Production_Deliveries_2011_2016: This dataset was created from the original secondary data file \u201cCoal_fields_USGS\u201d https://www.eia.gov/coal/data/browser/). This North Appalachian Basin Region was further split to separate the Pennsylvania Anthracite Region. In total, 109 separate coal source regions were developed. 
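As a rough illustration of the field-mapping step performed by CORD_Data_Script1.py together with CORD_Field_map.csv, the R sketch below renames the columns of every CSV file in a folder according to a mapping table; note that the actual pipeline is written in Python, and the mapping-table headers used here (old_field, new_field) are assumed for illustration rather than taken from the supplementary files.

# Illustrative only: apply a field map to all CSVs in in_dir, write to out_dir
remap_fields <- function {
  fmap <- read.csv(map_csv, stringsAsFactors = FALSE)
  dir.create
  for (f in list.files) {
    d   <- read.csv, stringsAsFactors = FALSE)
    hit <- match(names(d), fmap$old_field)          # look up each input column
    names(d)[!is.na(hit)] <- fmap$new_field[na.omit(hit)]
    write.csv, row.names = FALSE)
  }
}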
Coal production and delivery quantities information from 2011 through 2016 were added by spatially joining (point features closest to each region) and summing values from the \u201cCoal_Mine_Production_2011_2016\u201d and \u201cCoal_Mine_Deliveries_2011_2016\u201d, respectively.Coal_Delivery_Pathways_2011_2016: This dataset was extracted from the \u201cPage 5 Fuel Receipts and Costs\u201d tab in the EIA-923 excel files Coal_Mine_Deliveries_2011_2016: This dataset was created from the \u201cCoal_Delivery_Pathways_2011_2016\u201d dataset, by dissolving on the MSHA unique identifying number, latitude, and longitude fields to obtain a single unique record for each mine. Delivery quantities were aggregated and summed for each and all years, including total delivery count from each mine. Point features were then created to represent individual mines. To obtain the name of the coal source region associated with each mine , the \u201cCoal_Source_Region_2011_2016\u201d dataset was spatially joined to the mine point features . The mine point features were then joined to the \u201cCoal_Delivery_Pathways_2011_2016\u201d dataset\u201d by a temporary \u201cDelivery_ID\u201d field (deleted after join), to obtain the coal source region names within the deliveries dataset.Coal_Mine_Production_2011_2016: This was extracted from the EIA-7A excel files Power_Plant_Deliveries_by_Coal_Source_Region_2011_2016: This dataset was extracted from the \u201cCoal_Delivery_Pathways_2011_2016\u201ddataset. First, the \u201cCoal_Delivery_Pathways_2011_2016\u201d dataset was dissolved on the unique identifying number for power plants (\u201cPlant_code\u201d) and \u201cCoal_Source_Region\u201d fields to obtain a single unique record for each unique combination of power plant and coal source region. Delivery quantities were aggregated and summed for each and all years. The \u201cCoal_Source_Region\u201d field was then pivoted to add fields for coal delivery quantity totals for each unique combination of region and year. 
Point features were then created to represent individual power plants.Power_Plant_Deliveries_and_ByProducts_2011_2016: This dataset was extracted from the \u201cCoal_Delivery_Pathways_2011_2016\u201d dataset and the \u201c8A Annual Byproduct Disposition\u201d tab in the EIA-923 excel files Power_Plant_Consumption_2011_2016: This dataset was extracted from the \u201cPage 1 Generation and Fuel Data\u201d tab within the EIA-923 excel files Power_Plant_Stockpiles_2011_2016: This dataset was extracted from the \u201cPage 2 Coal Stocks Data\u201d tab within the EIA-923 excel files Power_Plant_ByProductsType_2011_2016: This dataset was extracted from the \u201c8A Annual Byproduct Disposition\u201d tab in the EIA-923 excel files disposition type]_2011_2016\u201d) and overall total .Samples original:AK_Coal_samples_AGDB_USGS: The original data were extracted from the \u201cMain\u201d, \u201cChemistry\u201d, and \u201cParameters\u201d tables within the Alaska Geochemical Database (geodatabase format) AK_Holitina_Basin_Coal_samples_ADGGS: The original data were filtered for the term \u201ccoal\u201d from the \u201cri2015\u20133-rock-eval-toc\u201d CSV file AK_Jarvis_creek_coal_samples_ADGGS: The original data were extracted from the \u201cpir2018\u20132-jarvis-creek-coal-proximate-analysis\u201d, \u201cpir2018\u20132-jarvis-creek-coal-ultimate-analysis\u201d, and \u201cpir2018\u20132-jarvis-creek-coal-rock-eval-ro-toc\u201d CSV files AR_Coal_samples_AGS: The original data were copied from \u201chttps://www.geology.arkansas.gov/energy/coal-in-arkansas.html under the \u201cChemical Analysis\u201d heading. The entire table was pasted into an excel spreadsheet, converted to a CSV and file geodatabase table.Coal_Ash_samples_NETL: The original data were extracted from the \u201cResults (whole Sample Conc.)\u201d and \u201cResults (Ash Based Conc.)\u201d tables or tabs within the \u201ccollected-samples-spreadsheet-v051515\u201d excel file Coal_CCP_samples_USGS: The original data Coal_samples_NaCQI_USGS: The original data were extracted from the \u201cDescriptive data\u201d, \u201cOxide analyses\u201d, \u201cWhole Coal \u2013 Remnant moisture\u201d, \u201cWhole Coal \u2013 Dry basis\u201d, and \u201cProximate-Ultimate analyses\u201d tables or tabs within the \u201cNaCQI\u201d excel file Coal_samples_NGDB_USGS: The original data were extracted from the \u201cGEODATA\u201d, \u201cNAA\u201d, \u201cOTHER\u201d, and \u201cUNKNOWN\u201d dbf files from the National Geochemical Database for Rocks Coal_samples_PSU_Energy_Institute_Coal_Bank: The original data were downloaded from the PSU Energy Institute Coal Sample Bank COALQUAL_USGS: The original data were extracted from the \u201cCOALQUAL\u201d point feature class, \u201cOxide\u201d, \u201cProximate_Ultimate\u201d, and \u201cTrace_Elements\u201d and file geodatabase tables within the \u201cCOALQUAL\u201d geodatabase Fly_Ash_Samples_Taggart: The original data were downloaded from the supplementary data associated with the Taggart et\u00a0al. (2016) journal publication IL_Coal_quality_samples_ISGS: The original data was directly converted from the \u201ccoal-quality-nonconf\u201d excel file IN_Coal_quality_samples_DB_IGWS: The dataset KY_Coal_quality_samples_KGS: The original data were extracted from the \u201cborehole_12,182,019_19,372\u201d, \u201cphysicalPropsAnaly\u201d, \u201cproximateAnaly\u201d, \u201cultimateAnaly\u201d, \u201cwholeTraceAnaly\u201d, and \u201cashTraceAnaly\u201d excel files from the KGS. 
All but the 2 \u201cTraceAnaly\u201d tables were joined using the \u201csample_number\u201d field and saved out into a single CSV file OK_Coal_quality_samples_OGS: The original data were extracted from the \u201cData-Coal-Analytical-Header\u201d and \u201cData-Coal-Analytical-Data\u201d excel files from the OGS PRB_Trace_elements_Stratigraphy_USGS: The original data was directly converted from the \u201cwaptg\u201d shapefile WY_Coal_samples_WSGS: The original data was extracted from the \u201cAppendix 1\u201d, \u201cAppendix 2\u201d, \u201cAppendix 3\u201d tabs within the \u201cri-71-ap\u201d excel file from the WSGS Samples integrated:Samples_All: This working dataset was created by manually mapping the fields within the \u201cSamples original\u201d datasets (CSV files) to a new single table schema (Supplementary File 4) in Excel. With the field mapping complete, the new table (CSV file) was populated with the new schema using several python scripts (Supplementary File 4). The resulting CSV file was converted into a file geodatabase table. Additionally, fields with all null values were deleted from the table.Samples_spatial: This working dataset was created by converting the available latitude and longitude coordinates from the \u201cSamples_All\u201d dataset to a point feature class .The authors declare that the work described is original and has not been submitted elsewhere for publication. No conflict of interest exists in this submission.Devin Justman: Methodology, Software, Investigation, Data curation, Writing \u2013 original draft, Visualization. Michael Sabbatino: Software, Data curation, Methodology, Writing \u2013 review & editing. Scott Montross: Conceptualization, Writing \u2013 review & editing. Scott Pantaleone: Writing \u2013 review & editing. Andrew Bean: Writing \u2013 review & editing. Kelly Rose: Conceptualization, Supervision, Project administration, Funding acquisition. Randal B. Thomas: Conceptualization, Supervision, Writing \u2013 review & editing.The authors declare that they have no known competing financial interests or personal relationships which have or could be perceived to have influenced the work reported in this article."} +{"text": "Periophthalmus modestus, is one of the mudskippers, which are the largest group of amphibious teleost fishes, which are uniquely adapted to live on mudflats. Because mudskippers can survive on land for extended periods by breathing through their skin and through the lining of the mouth and throat, they were evaluated as a model for the evolutionary sea-land transition of Devonian protoamphibians, ancestors of all present tetrapods.The shuttles hoppfish (mudskipper), Ab initio and homology-based gene prediction identified 30,505 genes, of which 94% had homology to the 14 Actinopterygii transcriptomes and 89% and 85% to Pfam familes and InterPro domains, respectively. Comparative genomics with 15 Actinopterygii species identified 59,448 gene families of which 12% were only in P. modestus.A total of 39.6, 80.2, 52.9, and 33.3 Gb of Illumina, Pacific Biosciences, 10X linked, and Hi-C data, respectively, was assembled into 1,419 scaffolds with an N50 length of 33 Mb and BUSCO score of 96.6%. The assembly covered 117% of the estimated genome size (729 Mb) and included 23 pseudo-chromosomes anchored by a Hi-C contact map, which corresponded to the top 23 longest scaffolds above 20 Mb and close to the estimated one. 
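For readers unfamiliar with the N50 statistic quoted above: it is the length such that scaffolds of that size or longer contain at least half of the total assembly. A minimal R sketch with toy scaffold lengths:

# N50: first length at which the cumulative sum of the sorted lengths
# reaches half of the total assembly size
n50 <- function(lengths) {
  s <- sort(lengths, decreasing = TRUE)
  s[which(cumsum(s) >= sum(s) / 2)[1]]
}
n50(c(40e6, 33e6, 10e6, 5e6, 1e6))  # toy example; returns 33e6, i.e. 33 Mb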
Of the genome, 43.8% were various repetitive elements such as DNAs, tandem repeats, long interspersed nuclear elements, and simple repeats. We present the high quality of the first genome assembly and gene annotation of the shuttles hoppfish. It will provide a valuable resource for further studies on sea-land transition, bimodal respiration, nitrogen excretion, osmoregulation, thermoregulation, vision, and mechanoreception. Mudskippers are of the subfamily Oxudercinae and the family Oxudercidae, which was recently separated from the family Gobiidae\u00a0, and theBoleophthalmus pectinirostris is useful as a draft genome.The family Oxudercidae has 10 genera and 42 species in FishBase. Among them, 4 species have been sequenced for the draft genome\u00a0[Periophthalmus modestus using Pacific Biosciences (PacBio) long-read, Illumina short-read, 10X linked read, and Hi-C sequencing. P. modestus\u00a0[P. modestus can reach a length of 10 cm , in May 2018. Total DNA was isolated from the muscle of P. modestus using the DNeasy Blood & Tissue kit , following the manufacturer\u2019s protocol.RRID:SCR_018059) with the same PCR primer set. The sequence data were edited and aligned using the ATGC 4.0 software .For species identification, the mitochondrial DNA cytochrome b gene barcode region was amplified using PCR as described in . The PCROrgans of specimens collected in July 2019 were manually dissected for eye, brain, liver, gut, muscle, and fin tissues, and total RNA was extracted from the dissected organs using the RNeasy Mini Kit . The RNA preparation was repeated 3 times, and then 3-replicate RNA samples were mixed and processed for RNA sequencing (RNA-seq) and isoform sequencing (Iso-seq).RRID:SCR_016386). For long-read sequencing, a 20-kb SMRTbell library was prepared and sequenced on a PacBio Sequel using 11 cells. To increase continuity in the genome assembly, we further produced linked reads and Hi-C reads. For linked-read sequencing, a 10X Chromium genome v2 library was constructed and sequenced on an Illumna NovaSeq 6000 instrument. For long-range scaffolding, a Dovetail Hi-C library was prepared with Dovetail Hi-C Library kit and sequenced on an Illumina NovaSeq 6000 instrument .For short-read sequencing, a paired-end library with insert sizes of 550\u00a0bp was constructed using Illumina TruSeq DNA Nano Prep Kit and sequenced on an Illumina HiSeq 4000 instrument . For PacBio Iso-seq, 3 libraries of length 1\u20132, 2\u20133, and 3\u20136\u00a0kb were prepared from polyadenylated RNA according to the PacBio Iso-seq protocol . Six SMRT cells were run on a PacBio RS II system .For RNA-seq, paired-end libraries with insert size of 150\u00a0bp were prepared with the Truseq mRNA Prep kit from total messenger RNA (mRNA), which was subsequently sequenced on an Illumina HiSeq 2500 \u00a0[RRID:SCR_005491)\u00a0[RRID:SCR_017014)\u00a0[Trimmomatic \u00a0 was used_005491)\u00a0 generateRRID:SCR_018550)\u00a0[RRID:SCR_017642)\u00a0[RRID:SCR_018550) using PacBio long reads, and further polished using Pilon \u00a0[RRID:SCR_010910)\u00a0[RRID:SCR_007936) and PCR duplicates were marked using Novosort\u00a0[RRID:SCR_001228)\u00a0[RRID:SCR_015008)\u00a0[RRID:SCR_021173)\u00a0[MiniASM\u00a0 assemble_018550)\u00a0 using Pa_017642)\u00a0 with the_014731)\u00a0 with the_010910)\u00a0 using Il_010910)\u00a0. Dovetai_010910)\u00a0 linked tNovosort\u00a0. 
Then Hi_001228)\u00a0 accessed_021173)\u00a0 purged hRRID:SCR_012954)\u00a0[de novo library built by RepeatModeler \u00a0[RRID:SCR_021169)\u00a0[Repeats were predicted in 3 ways. Tandem Repeats Finder\u00a0 identifi_012954)\u00a0 identifi_015027)\u00a0 and withde novo, RNA-based and homology-based methods to carry out protein-coding gene prediction. For the de novo and RNA-based gene prediction, Illumina RNA-seq and PacBio Iso-seq datasets were used to generate 2 hint files. Tophat \u00a0[RRID:SCR_008992)\u00a0[RRID:SCR_008417)\u00a0[RRID:SCR_018964)\u00a0[RRID:SCR_015661)\u00a0[RRID:SCR_008417). GeneMark-ET predicts genes with unsupervised training, whereas AUGUSTUS predicts genes with supervised training based on intron and protein hints.We combined _013035)\u00a0 aligned _013035)\u00a0 correcte_008417)\u00a0 generate_018964)\u00a0 predicte_015661)\u00a0 and AUGUP. modestus was aligned against the genes of 14 Actinopterygii genomes (RRID:SCR_011980) using TBLASTN \u00a0[RRID:SCR_020951)\u00a0[RRID:SCR_016088)\u00a0[ab initio prediction only when there was no conflict. Then the merged genes were removed if their coding sequences contained premature stop codons or were not supported by hints. InterProScan \u00a0[RRID:SCR_007701)\u00a0[RRID:SCR_004726)\u00a0[RRID:SCR_003352)\u00a0[RRID:SCR_003412)\u00a0[RRID:SCR_006969)\u00a0[RRID:SCR_003457)\u00a0[RRID:SCR_007952)\u00a0[RRID:SCR_005493)\u00a0[For the homology-based gene prediction, the assembly of genomes and vert_011822)\u00a0 with an _020951)\u00a0 clustere_016088)\u00a0. Finally_005829)\u00a0 annotate_007701)\u00a0, Pfam (P_004726)\u00a0, PIRSF (_003352)\u00a0, PRINTS _006969)\u00a0, PROSITE_003457)\u00a0, SUPERFA_007952)\u00a0, and TIGRRID:SCR_011809)\u00a0[RRID:SCR_017075)\u00a0[RRID:SCR_010835)\u00a0[To predict non-coding genes, Infernal \u00a0, RNAmmer_017075)\u00a0, and tRN_010835)\u00a0 were useRRID:SCR_007839)\u00a0[RRID:SCR_002811) enrichment was performed using the Fisher exact test and false discovery rate correction to identify functionally enriched GO terms among gene families relative to the \u201cgenome background,\u201d as annotated by Pfam.Chromeister\u00a0 performe_007839)\u00a0 identifiRRID:SCR_011812)\u00a0[RRID:SCR_017334)\u00a0[RRID:SCR_006086)\u00a0[RRID:SCR_000667)\u00a0[RRID:SCR_005983)\u00a0[For phylogenetic analysis and divergence time estimation, MUSCLE \u00a0 aligned _017334)\u00a0 filtered_006086)\u00a0 construc_000667)\u00a0 calculat_005983)\u00a0 with thehttp://www.ncbi.nlm.nih.gov/) showed >99% sequence identity to P. modestus (GenBank accession No. DQ901364.1), 89% to Periophthalmus argentilineatus (AP019359.1), and 85% to Periophthalmus barbarus (KF415633.1).Comparison of cytochrome b sequences against the NCBI GenBank database of Illumina, PacBio, 10X linked, and Hi-C data, respectively, for genome sequencing . The genP. modestus with 16 Actinopterygii species for repeats with 25 types . RNAmmerP. modestus and Periophthalmus magnuspinnatus had the lowest score, meaning the closest pair. The second and third lowest score corresponded to the pair of Boleophthalmus pectinirostris with P. magnuspinnatus and P. modestus, respectively. Note that the scores of Danio rerio and Lepisosteus oculatus with the others were >0.99 because of the evolutionary distances.The 17 Actinopterygii genomes were comAstatotilapia calliptera, Anabas testudineus, B. pectinirostris, D. rerio, Esox lucius, Gastersteus aculeatus, Kryptolebias marmoratus, Lates calcarifer, L. oculatus, Oryzias latipes, P. magnuspinnatus, P. 
modestus, Scophthalmus maximus, Tetraodon nigroviridis, and Takifugu rubripes, respectively. As shown in Fig.\u00a0, P. modestus had more families than the others, and the number of families common to \u226513 species was dominant. The unique gene families of P. modestus were enriched in GO terms for negative regulation of RNA metabolic and biosynthetic processes, nucleic acid-templated and DNA-templated transcription, nucleobase-containing compound biosynthesis, and cellular macromolecule biosynthesis. All genomes had 281 single-copy orthologous gene families, which were used to construct a phylogenetic tree and estimate divergence time. The TimeTree database\u00a0 was used to calibrate the divergence time estimates. The numbers of expanded and contracted gene families of P. modestus relative to its common ancestor were 411 and 225, while those of P. magnuspinnatus, the closest genome, were 257 and 442, respectively . J.H.C. and Y.Y. conceived the concept; H.Y.S., S.J., and S.H.J. collected and classified the sample; J.H.C. and Y.Y. designed the experiments; J.Y.Y., S.H.B., Y.Y., and J.H.C. analyzed the genomic data; S.H.B. and Y.Y. deposited the data into NCBI; and H.Y.S., Y.Y., and J.H.C. wrote the manuscript. All authors reviewed the manuscript."} +{"text": "Transpulmonary thermodilution (TPTD) is used to derive cardiac output CO, global end-diastolic volume GEDV and extravascular lung water EVLW. To facilitate interpretation of these data, several ratios have been developed, including the pulmonary vascular permeability index (defined as EVLW/(0.25*GEDV)) and the global ejection fraction ((4*stroke volume)/GEDV). PVPI and GEF have been associated with the aetiology of pulmonary oedema and with systolic cardiac function, respectively. Several studies demonstrated that the use of femoral venous access for indicator injection markedly overestimates GEDV; this also falsely reduces PVPI and GEF. One of these studies suggested a correction formula for femoral venous access that markedly reduced the bias for GEDV. Consequently, the last PiCCO algorithm requires information about the CVC, and correction for femoral access has been shown. However, two recent studies demonstrated inconsistencies of the last PiCCO algorithm, which uses uncorrected GEDV for PVPI but corrected GEDV for GEF. Nevertheless, these studies were based on mathematical analyses of data from a total of 15 patients equipped with only a femoral, but not with a jugular, CVC. Therefore, this study compared PVPI_fem and GEF_fem derived from femoral TPTD to values derived from jugular indicator injection in 25 patients with both jugular and femoral CVCs. 54 datasets in 25 patients were recorded. 
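As a minimal R sketch of the two derived ratios defined above (with hypothetical single-measurement values), and of why an overestimated GEDV depresses both indices:

EVLW <- 500   # extravascular lung water, mL (hypothetical)
GEDV <- 1400  # global end-diastolic volume, mL (hypothetical)
SV   <- 70    # stroke volume, mL (hypothetical)

PVPI <- EVLW / (0.25 * GEDV)  # pulmonary vascular permeability index, here ~1.43
GEF  <- (4 * SV) / GEDV       # global ejection fraction, here ~0.20

# If femoral injection inflates GEDV by ~29% (the GEDVI ratio reported
# below) and no correction is applied, both indices fall accordingly:
PVPI_fem_uncor <- EVLW / (0.25 * (1.29 * GEDV))
GEF_fem_uncor  <- (4 * SV) / (1.29 * GEDV)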
Each dataset consisted of three triplicate TPTDs using the jugular venous access as the gold standard and the femoral access with (PVPI_fem_cor) and without (PVPI_fem_uncor) information about the femoral indicator injection to evaluate, if correction for femoral GEDV pertains to PVPI_fem and GEF_fem.PVPI_fem_uncor was significantly lower than PVPI_jug . Similarly, PVPI_fem_cor was significantly lower than PVPI_jug . This is explained by the finding that PVPI_fem_uncor was not different to PVPI_fem_cor . This clearly suggests that correction for femoral CVC does not pertain to PVPI.GEF_fem_uncor was significantly lower than GEF_jug . By contrast, GEF_fem_cor was not different to GEF_jug . Furthermore, GEF_fem_cor was significantly higher than GEF_fem_uncor . This finding emphasizes that an appropriate correction for femoral CVC is applied to GEF_fem_cor.2/821mL/m2; 129%). This further emphasizes that GEF, but not PVPI is corrected in case of femoral indicator injection.The extent of the correction for GEF and the relation of PVPI_jug/PVPI_fem_uncor are in the same range as the ratio of GEDVI_fem_uncor/GEDVI_fem_cor (1056ml/mFemoral indicator injection for TPTD results in significantly lower values for PVPI and GEF. While the last PiCCO algorithm appropriately corrects GEF, the correction is not applied to PVPI. Therefore, GEF-values can be used in case of femoral CVC, but PVPI-values are substantially underestimated. PVPI_fem_uncor_form was calculated by multiplying PVPI_fem_uncor with the ratio 0.25*GEDVuncorrected/0.25*GEDVcorrected.PVPI_fem_uncor_form was corrected using the formula suggested for correction of femoral indicator injection derived GEDVI: GEDVI(TIF)Click here for additional data file.S2 Fig(TIF)Click here for additional data file."} +{"text": "Scientific Reports7: Article number: 4362310.1038/srep43623; published online: 03022017; updated: 05042017This Article contains errors in Reference 11 which was incorrectly given as:http://www.who.int/influenza/human_animal_interface/faq_H7N9/en/ (2014).World Health Organisation. Frequently Asked Questions on human infection caused by the avian influenza A(H7N9) virus. URL The correct reference is listed below:http://www.who.int/influenza/human_animal_interface/virology_laboratories_and_vaccines/influenza_virus_infections_humans_feb14.pdf (2014).World Health Organisation. Influenza virus infections in humans (February 2014). URL"} +{"text": "Podoviridae family and vB_KpnM_BIS47 of the Myoviridae family, which act against animal-pathogenic Klebsiella pneumoniae strains, were isolated from sewage plants in Poland. They possess double-stranded DNA genomes of 41,697\u00a0bp, 41,335\u00a0bp, 40,605\u00a0bp, and 147,443\u00a0bp, respectively.Four lytic phages, vB_KpnP_BIS33, vB_KpnP_IL33, and vB_KpnP_PRA33 of the Klebsiella pneumoniae is a facultative anaerobic Gram-negative bacterium that causes a wide range of diseases in humans and in domestic and farm animals. As a pathogen, it can acquire resistance to carbapenems, which are often considered to be drugs of last resort (K.\u00a0pneumoniae infections might be phage therapy. Lytic bacteriophages (phages) and/or their gene products, such as lysins, can easily be used as therapeutic agents against bacteria, as they are host specific and generally show no side effects.t resort . One proKlebsiella phages, vB_KpnP_BIS33, vB_KpnP_IL33, vB_KpnP_PRA33, and vB_KpnM_BIS47, were isolated by a standard enrichment method , I\u0142awa (IL33), and Prabuty (PRA33) in Poland. 
Genomic DNA from the phages was isolated with a modified phenol-chloroform extraction method were identified also for putative homing endonuclease, helicase, DNA ligase, and DNA polymerase. Additionally, CDSs for terminase, head-tail connector protein, collar protein, putative tail tubular proteins, and tail fiber protein were found. Bacteriophages vB_KpnP_BIS33 and vB_KpnP_IL33 possess their own RNA polymerase, suggesting that they are related to phage T7. The CDSs for holin and therapeutically desired endolysin were detected in all four genomes. Lysogenization genes, such as site-specific integrases and repressors, were not identified in any of the four genomes. Additionally, genome annotation of vB_KpnM_BIS47 revealed 18 tRNA genes.Whole-genome sequence alignments with BLASTn and moleNucleotide sequence accession numbers are shown in"} +{"text": "ACS Synthetic Biology abstracts (Volumes 1 to 6 and Volume 7 issue 1) was performed as well as extraction from the following databases: Bionemo v6.0 [1], RegTransbase r20120406 [2], RegulonDB v9.0 [3], RegPrecise v4.0 [4] and Sigmol v20180122 [5].The aim of this dataset is to identify and collect compounds that are known for being detectable by a living cell, through the action of a genetically encoded biosensor and is centred on bacterial transcription factors. Such a dataset should open the possibility to consider a wide range of applications in synthetic biology. The reader will find in this dataset the name of the compounds, their InChI (molecular structure), the publication where the detection was reported, the organism in which this was detected or engineered, the type of detection and experiment that was performed as well as the name of the biosensor. A comment field is also provided that explains why the compound was included in the dataset, based on quotes from the reference publication or the database it was extracted from. Manual curation of Specifications TableValue of the data\u2022This dataset provides a basis for the development of new biosensing circuits for synthetic biology and metabolic engineering applications, e.g. the design of whole-cell biosensor, high-throughput screening experiments, dynamic regulation of metabolic pathways, transcription factor engineering or creation of sensing-enabling pathways.\u2022This dataset provides a unique source of a broad number of compounds that can be detected and acted upon by a cell, increasing the possibility of orthogonal circuit design from the few usual compounds used in those applications.\u2022The manually curated section provides information on where the biosensor has been first reported and successfully used, enabling the reader to select trustworthy information for his application of choice.\u2022Detectable compounds can be searched by both by name and chemical similarity.\u2022This dataset is an update of [10.6084/m9.figshare.3144715.v1].1ACS Synthetic Biology abstracts (Volumes 1 to 6 and Volume 7 issue 1) was performed as well as extraction from the following databases: Bionemo v6.0 The aim of this dataset is to identify and collect compounds that are known for being detectable by a living cell, through the action of a genetically encoded biosensor and is centred on bacterial transcription factors. The dataset should allow the synthetic biology community to consider a wide range of applications. 
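As a usage sketch, the records described above can be loaded and filtered by compound name in R; the file name and column headers used here (name, InChI, organism, biosensor) are assumed for illustration rather than taken from the released files.

# Illustrative lookup of a detectable compound by name
sensors <- read.csv("detectable_compounds.csv", stringsAsFactors = FALSE)
hits <- subset(sensors, grepl)
hits[, c]
# A chemical-similarity search would instead compare the InChI structures,
# e.g. via fingerprints from a cheminformatics toolkit.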
The reader will find in this dataset the name of the compounds, their InChI (molecular structure), the publication where the detection was reported, the organism in which this was detected or engineered, the type of detection and experiment that was performed as well as the name of the biosensor. A comment field is also provided that explains why the compound was included in the dataset, based on quotes from the reference publication or the database it was extracted from. Manual curation of This dataset is available online on GitHub to allow for further updates as well as community contributions.2\u2022Manual curation of ACS Synthetic Biology (Volume 1\u20136 and Volume 7 issue 1):ACS Synthetic Biology (Volume 1\u20136 and Volume 7 issue 1) were read and information relevant to this dataset was extracted from those abstracts. The aim of this manual curation was to establish a list of detectable compounds whose detection method was already successfully implemented in a synthetic circuit, providing a good basis for further implementation for synthetic biologists.All abstracts of \u2022Bionemo v6.0The SQL request used to create this dataset is:SELECT DISTINCT substrate.id_substrate, minesota_code, name FROM substrateINNER JOIN complex_substrate ON complex_substrate.id_substrate=substrate.id_substrateINNER JOIN complex ON complex.id_complex=complex_substrate.id_complexWHERE activity='REG';\u2022RegTransbase r20120406The SQL request used to create this dataset is:SELECT DISTINCT a.pmid, e.name, r.nameFROM regulator2effectors AS reINNER JOIN exp2effectors AS ee ON ee.effector_guid=re.effector_guidINNER JOIN dict_effectors AS e ON e.effector_guid=ee.effector_guidINNER JOIN regulators AS r ON r.regulator_guid=re.regulator_guidINNER JOIN articles AS a ON a.art_guid=ee.art_guidORDER BY e.name;RegTransbase was not maintained anymore at the time of writing of this manuscript.\u2022RegulonDB v9.0The SQL request used to create this dataset is:SELECT c.conformation_id, c.final_state, e.effector_id, e.effector_name, tf.transcription_factor_id, tf.transcription_factor_name, p.reference_id, xdb.external_db_nameFROM effector AS eINNER JOIN conformation_effector_link AS mm_ce ON mm_ce.effector_id=e.effector_idLEFT JOIN conformation AS c ON c.conformation_id=mm_ce.conformation_idLEFT JOIN transcription_factor AS tf ON tf.transcription_factor_id=c.transcription_factor_idLEFT JOIN object_ev_method_pub_link AS x ON x.object_id=c.conformation_id OR x.object_id=tf.transcription_factor_id OR x.object_id=e.effector_idLEFT JOIN publication AS p ON p.publication_id=x.publication_idLEFT JOIN external_db AS xdb ON xdb.external_db_id=p.external_db_idWHERE c.interaction_type IS Null OR c.interaction_type!='Covalent';\u2022RegPrecise v4.0The RegPrecise website was accessed (version v4.0) and all relevant data was extracted from the effector pages of the website.\u2022Sigmol v20170216Quorum Sensing Signaling Molecule page. In the \u201cdetected by\u201d column, we provide the class of signaling compounds the compound belongs to. 
The comment field reads \u2018Extracted from Sigmol v20170216 \u2013 Uniq_QSSM_\u201cnumber\u201d\u2019.Sigmol was accessed on 16/02/2017 and all effector data was retrieved from the unique 2.1In in vivo, unspecified or other), as well as the repartition of Biosensor type in the full dataset and the manually curated dataset from ACS Synthetic Biology."} +{"text": "The URL at the end of the Materials and Methods section directing readers to download the data is incorrect.https://figshare.com/articles/Survey_of_biologist_s_computational_needs/4643641The correct URL is:"} +{"text": "Gene set enrichment analysis is a popular approach for prioritising the biological processes perturbed in genomic datasets. The Bioconductor project hosts over 80 software packages capable of gene set analysis. Most of these packages search for enriched signatures amongst differentially regulated genes to reveal higher level biological themes that may be missed when focusing only on evidence from individual genes. With so many different methods on offer, choosing the best algorithm and visualization approach can be challenging. The EGSEA package solves this problem by combining results from up to 12 prominent gene set testing algorithms to obtain a consensus ranking of biologically relevant results.This workflow demonstrates how EGSEA can extend limma-based differential expression analyses for RNA-seq and microarray data using experiments that profile 3 distinct cell populations important for studying the origins of breast cancer. Following data normalization and set-up of an appropriate linear model for differential expression analysis, EGSEA builds gene signature specific indexes that link a wide range of mouse or human gene set collections obtained from MSigDB, GeneSetDB and KEGG to the gene expression data being investigated. EGSEA is then configured and the ensemble enrichment analysis run, returning an object that can be queried using several S4 methods for ranking gene sets and visualizing results via heatmaps, KEGG pathway views, GO graphs, scatter plots and bar plots.\u00a0Finally, an HTML report that combines these displays can fast-track the sharing of results with collaborators, and thus expedite downstream biological validation.\u00a0EGSEA is simple to use and can be easily integrated with existing gene expression analysis pipelines for both human and mouse data. In an effort to unify these computational methods and knowledge-bases, theEGSEA R/Bioconductor package was developed. EGSEA, which stands forEnsemble of Gene Set Enrichment Analyses5 combines the results from multiple algorithms to arrive at a consensus gene set ranking to identify biological themes and pathways perturbed in an experiment. EGSEA calculates seven statistics to combine the individual gene set statistics of base GSE methods to rank biologically relevant gene sets. The current version of theEGSEA package6 utilizes the analysis results of up to twelve prominent GSE algorithms that include:ora7,globaltest8,plage9,safe10,zscore11,gage12,ssgsea13,padog14,gsva15,camera16,roast17 andfry17. Theora,gage,camera andgsva methods depend on acompetitive null hypothesis which assumes the genes in a set do not have a stronger association with the experimental condition compared to randomly chosen genes outside the set. 
The remaining eight methods are based on aself-contained null hypothesis that only considers genes within a set and again assumes that they have no association with the experimental condition.Gene set enrichment analysis allows researchers to efficiently extract biological insights from long lists of differentially expressed genes by interrogating them at a systems level. In recent years, there has been a proliferation of gene set enrichment (GSE) analysis methods released through the Bioconductor projectEGSEAdata package that includes more than 25,000 gene sets for human and mouse organised according to their database sources and c1\u2013c7) that explore different biological themes ranging from very broad through to more specialised ones focusing on cancer and immunology (c7). The other main sources are GeneSetDB3 and KEGG4 which have similar collections focusing on different biological characteristics 18 that consists of 3 cell populations and Mature Luminal (ML)) sorted from the mammary glands of female virgin mice. Triplicate RNA samples from each population were obtained in 3 batches and sequenced on an Illumina HiSeq 2000 using a 100 base-pair single-ended protocol. Raw sequence reads from the fastq files were aligned to the mouse reference genome (mm10) using theRsubread package19. Next, gene-level counts were obtained usingfeatureCounts20 based onRsubread\u2019s built-inmm10 RefSeq-based annotation. The raw data along with further information on experimental design and sample preparation can be downloaded from the Gene Expression Omnibus using GEO Series accession number GSE63310 and will be preprocessed according to the RNA-seq workflow published by Lawet al. (2016)21.The first experiment analysed in this workflow is an RNA-seq dataset from Sheridanet al. (2010)22 and is the microarray equivalent of the RNA-seq dataset mentioned above. The same 3 populations , LP and ML) were sorted from mouse mammary glands via flow cytometry. Total RNA from 5 replicates of each cell population were hybridised onto 3 Illumina MouseWG-6 v2 BeadChips. The intensity files and chip annotation file available in Illumina\u2019s proprietary formats (IDAT and BGX respectively) can be downloaded fromhttp://bioinf.wehi.edu.au/EGSEA/arraydata.zip. The raw data from this experiment is also available from GEO under Series accession number GSE19446.The second experiment analysed in this workflow comes from Limet al. (2016) which performs a differential gene expression analysis on this data set using the Bioconductor packagesedgeR23,limma24 andGlimma25 with gene annotation from theMus.musculus package26. Thelimma package offers a well-developed suite of statistical methods for dealing with differential expression for both microarray and RNA-seq datasets and will be used in the analyses of both datasets presented in this workflow.Our RNA-seq analysis follows on directly from the workflow of Lawhttp://bioinf.wehi.edu.au/EGSEA/mam.rnaseq.rdata. The code below loads the preprocessed count matrix from Lawet al. 
(2016), performs TMM normalisation27 on the raw counts, and calculates voom weights for use in comparisons of gene expression between Basal and LP, Basal and ML, and LP and ML populations.To get started with this analysis, download the R data file from> library(limma)> library(edgeR)> load(\"mam.rnaseq.rdata\")> names(mam.rnaseq.data)[1] \"samples\" \"counts\" \"genes\"> dim(mam.rnaseq.data)[1] 14165 9> x = calcNormFactors> design = model.matrix(~0+x$samples$group+x$samples$lane)> colnames(design) = gsub)> colnames(design) = gsub)> head(design) Basal LP ML L006 L008\t 1 0 1 0 0 0\t 2 0 0 1 0 0\t 3 1 0 0 0 0\t 4 1 0 0 1 0\t 5 0 0 1 1 0\t 6 0 1 0 1 0\t > contr.matrix = makeContrasts)> head(contr.matrix) Contrasts Levels BasalvsLP BasalvsML LPvsML\t Basal\t 1\t 1\t 0\t LP\t -1\t 0\t 1\t ML\t 0\t -1\t-1\t L006\t 0\t 0\t 0\t L008\t 0\t 0\t 0 voom function28 from thelimma package converts counts to log-counts-per-million (log-cpm) and calculates observation-level precision weights. Thevoom object (v) contains normalized log-cpm values and gene information used by all of the methods in the EGSEA analysis below. The precisionweights stored withinv are also used by thecamera,roast andfry gene set testing methods.The> v = voom > names(v) [1] \"genes\" \"targets\" \"E\" \"weights\" \"design\" et al. (2016), as a detailed explanation of these steps is beyond the scope of this article.For further information on preprocessing see Lawvoom object (v), a design matrix (design) and an optional contrasts matrix (contr.matrix). The design matrix describes how the samples in the experiment relate to the coefficients estimated by the linear model29. The contrasts matrix then compares two or more of these coefficients to allow relative assessment of differential expression. Base methods that utilize linear models such as those fromlimma andGSVA make use of the design and contrasts matrices directly. For methods that do not support linear models, these two matrices are used to extract the group information for each comparison.The EGSEA algorithm makes use of theEGSEAdata includes more than 25,000 gene sets organized in collections depending on their database sources. 
Summary information about the gene set collections available inEGSEAdata can be displayed as follows:The package> library(EGSEAdata)> egsea.data(\"mouse\")The following databases are available in EGSEAdata for Mus musculus: Database name: KEGG PathwaysVersion: NADownload/update date: 07 March 2017Data source: gage::kegg.gsetsSupported species: human, mouse, ratGene set collections: Signaling, Metabolism, DiseaseRelated data objects: kegg.pathwaysNumber of gene sets in each collection for Mus musculus :Signaling: 132Metabolism: 89Disease: 67 Database name: Molecular Signatures Database (MSigDB)Version: 5.2Download/update date: 07 March 2017Data source: http://software.broadinstitute.org/gseaSupported species: human, mouseGene set collections: h, c1, c2, c3, c4, c5, c6, c7Related data objects: msigdb, Mm.H, Mm.c2, Mm.c3, Mm.c4, Mm.c5, Mm.c6, Mm.c7Number of gene sets in each collection for Mus musculus :h Hallmark Signatures: 50c2 Curated Gene Sets: 4729c3 Motif Gene Sets: 836c4 Computational Gene Sets: 858c5 GO Gene Sets: 6166c6 Oncogenic Signatures: 189c7 Immunologic Signatures: 4872 Database name: GeneSetDB DatabaseVersion: NADownload/update date: 15 January 2016Data source: http://www.genesetdb.auckland.ac.nz/Supported species: human, mouse, ratGene set collections: gsdbdis, gsdbgo, gsdbdrug, gsdbpath, gsdbregRelated data objects: gsetdb.human, gsetdb.mouse, gsetdb.ratNumber of gene sets in each collection for Mus musculus :GeneSetDB Drug/Chemical: 6019GeneSetDB Disease/Phenotype: 5077GeneSetDB Gene Ontology: 2202GeneSetDB Pathway: 1444GeneSetDB Gene Regulation: 201 Type ? to get a specific informationabout it, e.g., ?kegg.pathways. ?) command, for instance?Mm.c2 will return more information on the mouse version of the c2 collection from MSigDB. The above information can be returned as a list:As the output above suggests, users can obtain help on any of the collections using the standard R help > names(info)[1] \"kegg\" \"msigdb\" \"gsetdb\"> info$msigdb$info$collections[1] \"h\" \"c1\" \"c2\" \"c3\" \"c4\" \"c5\" \"c6\" \"c7\" EGSEA package, the KEGG pathways, c2 (curated gene sets) and c5 (Gene Ontology gene sets) collections from the MSigDB database are selected. Next, an index is built for each gene set collection using the EGSEA indexing functions to link the genes in the different gene set collections to the rows of our RNA-seq gene expression matrix. Indexes for the c2 and c5 collections from MSigDB and for the KEGG pathways are built using thebuildIdx function which relies on Entrez gene IDs as its key. In theEGSEAdata gene set collections, Entrez IDs are used as they are widely adopted by the different source databases and tend to be more consistent and robust since there is one identifier per gene in a gene set. It is also relatively easy to convert other gene IDs into Entrez IDs.To highlight the capabilities of the> library(EGSEA)> gs.annots = buildIdx, go.part = TRUE)[1] \"Loading MSigDB Gene Sets ... \"[1] \"Loaded gene sets for the collection c2 ...\"[1] \"Indexed the collection c2 ...\"[1] \"Created annotation for the collection c2 ...\"[1] \"Loaded gene sets for the collection c5 ...\"[1] \"Indexed the collection c5 ...\"[1] \"Created annotation for the collection c5 ...\"MSigDB c5 gene set collection has been partitioned intoc5BP, c5CC, c5MF[1] \"Building KEGG pathways annotation object ... 
\"> names(gs.annots)[1] \"c2\" \"c5BP\" \"c5CC\" \"c5MF\" \"kegg\" summary,show andgetSetByName (orgetSetByID) can be invoked on an object of classGSCollectionIndex, which stores all of the relevant gene set information, as follows:To obtain additional information on the gene set collection indexes, including the total number of gene sets, the version number and date of last revision, the methods> class(gs.annots$c2)[1] \"GSCollectionIndex\"attr[1] \"EGSEA\"> summary(gs.annots$c2)c2 Curated Gene Sets (c2): 4726 gene sets - Version: 5.2, Update date: 07 March 2017> show(gs.annots$c2)An object of class \"GSCollectionIndex\"Number of gene sets: 4726Annotation columns: ID, GeneSet, BroadUrl, Description, PubMedID, NumGenes, ContributorTotal number of indexing genes: 14165Species: Mus musculusCollection name: c2 Curated Gene SetsCollection unique label: c2Database version: 5.2Database update date: 07 March 2017> s = getSetByNameID: M13072GeneSet: SMID_BREAST_CANCER_LUMINAL_A_DNBroadUrl: http://www.broadinstitute.org/gsea/msigdb/cards/SMID_BREAST_CANCER_LUMINAL_A_DN.htmlDescription: Genes down-regulated in the luminal A subtype of breast cancer.PubMedID: 18451135NumGenes: 23/24Contributor: Jessica Robertson> class(s)[1] \"list\"> names(s)[1] \"SMID_BREAST_CANCER_LUMINAL_A_DN\"> names(s$SMID_BREAST_CANCER_LUMINAL_A_DN)[1] \"ID\" \"GeneSet\" \"BroadUrl\" \"Description\" \"PubMedID\"[6] \"NumGenes\" \"Contributor\" GSCollectionIndex store for each gene set the Entrez gene IDs in the slotoriginal, the indexes in the slotidx and additional annotation for each set in the slotanno.Objects of class> slotNames(gs.annots$c2)[1] \"original\" \"idx\" \"anno\" \"featureIDs\" \"species\"[6] \"name\" \"label\" \"version\" \"date\" buildCustomIdx,buildGMTIdx,buildKEGGIdx,buildMSigDBIdx andbuildGeneSetDBIdx can be also used to build gene set collection indexes. The functionsbuildCustomIdx andbuildGMTIdx were written to allow users to run EGSEA on gene set collections that may have been curated within a lab or downloaded from public databases and allow use of gene identifiers other than Entrez IDs. Example databases include, ENCODE Gene Set Hub (available fromhttps://sourceforge.net/projects/encodegenesethub/), which is a growing resource of gene sets derived from high quality ENCODE profiling experiments encompassing hundreds of DNase hypersensitivity, histone modification and transcription factor binding experiments30. Other resources include PathwayCommons (http://www.pathwaycommons.org/)31 and theKEGGREST32 package that provides access to up-to-date KEGG pathways across many species.Other EGSEA functions such asgenes data.frame of thevoom object as follows:Before an EGSEA test is carried out, a few parameters need to be specified. First, a mapping between Entrez IDs and Gene Symbols is created for use by the visualization procedures. This mapping can be extracted from the> colnames(v$genes)[1] \"ENTREZID\" \"SYMBOL\" \"CHR\"> symbolsMap = v$genes> colnames(symbolsMap) = c> symbolsMap = as.character baseMethods in the code below), which determines the individual algorithms that are used in the ensemble testing. 
The supported base methods can be listed using the functionegsea.base as follows:Another important parameter in EGSEA is the list of base GSE methods , which can be listed as follows:Since each base method generates different> egsea.combine[1] \"fisher\" \"wilkinson\" \"average\" \"logitp\" \"sump\" \"sumz\"[7] \"votep\" \"median\" Finally, the sorting of EGSEA results plays an essential role in identifying relevant gene sets. Any of EGSEA\u2019s combined scores or the rankings from individual base methods can be used for sorting the results.> egsea.sort [1] \"p.value\" \"p.adj\" \"vote.rank\" \"avg.rank\" \"med.rank\" [6] \"min.pvalue\" \"min.rank\" \"avg.logfc\" \"avg.logfc.dir\" \"direction\"[11] \"significance\" \"camera\" \"roast\" \"safe\" \"gage\"[16] \"padog\" \"plage\" \"zscore\" \"gsva\" \"ssgsea\"[21] \"globaltest\" \"ora\" \"fry\" p.adj is the default option for sorting EGSEA results for convenience, we recommend the use of eithermed.rank orvote.rank because they efficiently utilize the rankings of individual methods and tend to produce fewer false positives5.Althoughegsea function that takes avoom object, a contrasts matrix, collections of gene sets and other run parameters as follows:Next, the EGSEA analysis is performed using the> gsa = egseaEGSEA analysis has started##------ Fri Jun 16 09:49:11 2017 ------##Log fold changes are estimated using limma package ...limma DE analysis is carried out ...Number of used cores has changed to 3in order to avoid CPU overloading.EGSEA is running on the provided data and c2 collectionEGSEA is running on the provided data and c5BP collectionEGSEA is running on the provided data and c5CC collectionEGSEA is running on the provided data and c5MF collectionEGSEA is running on the provided data and kegg collection##------ Fri Jun 16 09:57:56 2017 ------##EGSEA analysis took 525.812 seconds.EGSEA analysis has completed contrasts argument. If this parameter isNULL, all pairwise comparisons based onv$targets$group are created, assuming thatgroup is the primary factor in the design matrix. Likewise, all the coefficients of the primary factor are used if the design matrix has an intercept.In situations where the design matrix includes an intercept, a vector of integers that specify the columns of the design matrix to test using EGSEA can be passed to theEGSEA is implemented with parallel computing features enabled using theparallel package33 at both the method-level and experimental contrast-level. The running time of the EGSEA test depends on the base methods selected and whether report generation is enabled or not. The latter significantly increases the run time, particularly if the argumentdisplay.top is assigned a large value (> 20) and/or a large number of gene set collections are selected. EGSEA reporting functionality generates set-level plots for the top gene sets as well as collection-level plots.EGSEA package also has a function namedegsea.cnt, that can perform the EGSEA test using an RNA-seq count matrix rather than avoom object, a function namedegsea.ora, that can perform over-representation analysis with EGSEA reporting capabilities using only a vector of gene IDs, and theegsea.ma function that can perform EGSEA testing using a microarray expression matrix as shown later in the workflow.TheClasses used to manage the results. The output of the functionsegsea,egsea.cnt,egsea.ora andegsea.ma is an S4 object of classEGSEAResults. Several S4 methods can be invoked to query this object. 
For example, an overview of the EGSEA analysis can be displayed using theshow method as follows:> show(gsa)An object of class \"EGSEAResults\"Total number of genes: 14165Total number of samples: 9Contrasts: BasalvsLP, BasalvsML, LPvsMLBase GSE methods: camera (limma:3.32.2), safe (safe:3.16.0), gage (gage:2.26.0), padog (PADOG:1.18.0), plage (GSVA:1.24.1), zscore (GSVA:1.24.1), gsva (GSVA:1.24.1), ssgsea (GSVA:1.24.1),P-values combining method: wilkinsonSorting statistic: med.rankOrganism: Mus musculusHTML report generated: NoTested gene set collections:c2 Curated Gene Sets (c2): 4726 gene sets - Version: 5.2, Update date: 07 March 2017c5 GO Gene Sets (BP) (c5BP): 4653 gene sets - Version: 5.2, Update date: 07 March 2017c5 GO Gene Sets (CC) (c5CC): 584 gene sets - Version: 5.2, Update date: 07 March 2017c5 GO Gene Sets (MF) (c5MF): 928 gene sets - Version: 5.2, Update date: 07 March 2017KEGG Pathways (kegg): 287 gene sets - Version: NA, Update date: 07 March 2017EGSEA version: 1.5.2EGSEAdata version: 1.4.0Use summary(object) and topSets to explore this object. p-values derived from different GSE algorithms, the sorting statistic used and the size of each gene set collection. Note that the gene set collections are identified using the labels that appear in parentheses (e.g.c2) in the output ofshow.This command displays the number of genes and samples that were included in the analysis, the experimental contrasts, base GSE methods, the method used to combine theGetting top ranked gene sets. A summary of the top 10 gene sets in each collection for each contrast in addition to the EGSEA comparative analysis can be displayed using the S4 methodsummary as follows:> summary(gsa)**** Top 10 gene sets in the c2 Curated Gene Sets collection ****** Contrast BasalvsLP **LIM_MAMMARY_STEM_CELL_DN | LIM_MAMMARY_LUMINAL_PROGENITOR_UPMONTERO_THYROID_CANCER_POOR_SURVIVAL_UP | SMID_BREAST_CANCER_LUMINAL_A_DNNAKAYAMA_SOFT_TISSUE_TUMORS_PCA2_UP | REACTOME_LATENT_INFECTION_OF_HOMO_SAPIENS...REACTOME_TRANSFERRIN_ENDOCYTOSIS_AND_RECYCLING | FARMER_BREAST_CANCER_CLUSTER_2KEGG_EPITHELIAL_CELL_SIGNALING_... 
| LANDIS_BREAST_CANCER_PROGRESSION_UP ** Contrast BasalvsML **LIM_MAMMARY_STEM_CELL_DN | LIM_MAMMARY_STEM_CELL_UPLIM_MAMMARY_LUMINAL_MATURE_DN | PAPASPYRIDONOS_UNSTABLE_ATEROSCLEROTIC_PLAQUE_DNNAKAYAMA_SOFT_TISSUE_TUMORS_PCA2_UP | LIM_MAMMARY_LUMINAL_MATURE_UPCHARAFE_BREAST_CANCER_LUMINAL_VS_MESENCHYMAL_UP | RICKMAN_HEAD_AND_NECK_CANCER_AYAGUE_PRETUMOR_DRUG_RESISTANCE_DN | BERTUCCI_MEDULLARY_VS_DUCTAL_BREAST_CANCER_DN ** Contrast LPvsML **LIM_MAMMARY_LUMINAL_MATURE_UP | LIM_MAMMARY_LUMINAL_MATURE_DNPHONG_TNF_RESPONSE_VIA_P38_PARTIAL | WOTTON_RUNX_TARGETS_UPWANG_MLL_TARGETS | PHONG_TNF_TARGETS_DNREACTOME_PEPTIDE_LIGAND_BINDING_RECEPTORS | CHIANG_LIVER_CANCER_SUBCLASS_CTNNB1_DNGERHOLD_RESPONSE_TO_TZD_DN | DURAND_STROMA_S_UP ** Comparison analysis **LIM_MAMMARY_LUMINAL_MATURE_DN | LIM_MAMMARY_STEM_CELL_DNNAKAYAMA_SOFT_TISSUE_TUMORS_PCA2_UP | LIM_MAMMARY_LUMINAL_MATURE_UPCOLDREN_GEFITINIB_RESISTANCE_DN | LIM_MAMMARY_STEM_CELL_UPCHARAFE_BREAST_CANCER_LUMINAL_VS_MESENCHYMAL_UP | LIM_MAMMARY_LUMINAL_PROGENITOR_UPBERTUCCI_MEDULLARY_VS_DUCTAL_BREAST_CANCER_DN | MIKKELSEN_IPS_WITH_HCP_H3K27ME3 **** Top 10 gene sets in the c5 GO Gene Sets (BP) collection ****** Contrast BasalvsLP **GO_SYNAPSE_ORGANIZATION | GO_IRON_ION_TRANSPORTGO_CALCIUM_INDEPENDENT_CELL_CELL_ADHESION_VIA_PLASMA_MEMBRANE_CELL_ADHESION_MOLECULES | GO_PH_REDUCTIONGO_HOMOPHILIC_CELL_ADHESION_VIA_PLASMA_MEMBRANE_ADHESION_MOLECULES | GO_VACUOLAR_ACIDIFICATIONGO_FERRIC_IRON_TRANSPORT | GO_TRIVALENT_INORGANIC_CATION_TRANSPORTGO_NEURON_PROJECTION_GUIDANCE | GO_MESONEPHROS_DEVELOPMENT ** Contrast BasalvsML **GO_FERRIC_IRON_TRANSPORT | GO_TRIVALENT_INORGANIC_CATION_TRANSPORTGO_IRON_ION_TRANSPORT | GO_NEURON_PROJECTION_GUIDANCEGO_GLIAL_CELL_MIGRATION | GO_SPINAL_CORD_DEVELOPMENTGO_REGULATION_OF_SYNAPSE_ORGANIZATION | GO_ACTION_POTENTIALGO_MESONEPHROS_DEVELOPMENT | GO_NEGATIVE_REGULATION_OF_SMOOTH_MUSCLE_CELL_MIGRATION ** Contrast LPvsML **GO_NEGATIVE_REGULATION_OF_NECROTIC_CELL_DEATH | GO_PARTURITIONGO_RESPONSE_TO_VITAMIN_D | GO_GPI_ANCHOR_METABOLIC_PROCESSGO_REGULATION_OF_BLOOD_PRESSURE | GO_DETECTION_OF_MOLECULE_OF_BACTERIAL_ORIGINGO_CELL_SUBSTRATE_ADHESION | GO_PROTEIN_TRANSPORT_ALONG_MICROTUBULEGO_INTRACILIARY_TRANSPORT | GO_CELLULAR_RESPONSE_TO_VITAMIN ** Comparison analysis **GO_IRON_ION_TRANSPORT | GO_FERRIC_IRON_TRANSPORTGO_TRIVALENT_INORGANIC_CATION_TRANSPORT | GO_NEURON_PROJECTION_GUIDANCEGO_MESONEPHROS_DEVELOPMENT | GO_SYNAPSE_ORGANIZATIONGO_REGULATION_OF_SYNAPSE_ORGANIZATION | GO_MEMBRANE_DEPOLARIZATION_DURING_CARDIAC_MUSCLE_CELL_ACTION_POTENTIALGO_HOMOPHILIC_CELL_ADHESION_VIA_PLASMA_MEMBRANE_ADHESION_MOLECULES | GO_NEGATIVE_REGULATION_OF_SMOOTH_MUSCLE_CELL_MIGRATION **** Top 10 gene sets in the c5 GO Gene Sets (CC) collection ****** Contrast BasalvsLP **GO_PROTON_TRANSPORTING_V_TYPE_ATPASE_COMPLEX | GO_VACUOLAR_PROTON_TRANSPORTING_V_TYPE_ATPASE_COMPLEXGO_MICROTUBULE_END | GO_MICROTUBULE_PLUS_ENDGO_ACTIN_FILAMENT_BUNDLE | GO_CELL_CELL_ADHERENS_JUNCTIONGO_NEUROMUSCULAR_JUNCTION | GO_AP_TYPE_MEMBRANE_COAT_ADAPTOR_COMPLEXGO_INTERMEDIATE_FILAMENT | GO_CONDENSED_NUCLEAR_CHROMOSOME_CENTROMERIC_REGION ** Contrast BasalvsML **GO_FILOPODIUM_MEMBRANE | GO_LATE_ENDOSOME_MEMBRANEGO_PROTON_TRANSPORTING_V_TYPE_ATPASE_COMPLEX | GO_NEUROMUSCULAR_JUNCTIONGO_COATED_MEMBRANE | GO_ACTIN_FILAMENT_BUNDLEGO_CLATHRIN_COAT | GO_AP_TYPE_MEMBRANE_COAT_ADAPTOR_COMPLEXGO_CLATHRIN_ADAPTOR_COMPLEX | GO_CONTRACTILE_FIBER ** Contrast LPvsML **GO_CILIARY_TRANSITION_ZONE | GO_TCTN_B9D_COMPLEXGO_NUCLEAR_NUCLEOSOME | 
GO_INTRINSIC_COMPONENT_OF_ORGANELLE_MEMBRANE
GO_ENDOPLASMIC_RETICULUM_QUALITY_CONTROL_COMPARTMENT | GO_KERATIN_FILAMENT
GO_PROTEASOME_COMPLEX | GO_CILIARY_BASAL_BODY
GO_PROTEASOME_CORE_COMPLEX | GO_CORNIFIED_ENVELOPE
** Comparison analysis **
GO_PROTON_TRANSPORTING_V_TYPE_ATPASE_COMPLEX | GO_ACTIN_FILAMENT_BUNDLE
GO_NEUROMUSCULAR_JUNCTION | GO_AP_TYPE_MEMBRANE_COAT_ADAPTOR_COMPLEX
GO_CONTRACTILE_FIBER | GO_INTERMEDIATE_FILAMENT
GO_LATE_ENDOSOME_MEMBRANE | GO_CLATHRIN_VESICLE_COAT
GO_ENDOPLASMIC_RETICULUM_QUALITY_CONTROL_COMPARTMENT | GO_MICROTUBULE_END
**** Top 10 gene sets in the c5 GO Gene Sets (MF) collection ****
** Contrast BasalvsLP **
GO_HYDROGEN_EXPORTING_ATPASE_ACTIVITY | GO_SIGNALING_PATTERN_RECOGNITION_RECEPTOR_ACTIVITY
GO_LIPID_TRANSPORTER_ACTIVITY | GO_TRIGLYCERIDE_LIPASE_ACTIVITY
GO_AMINE_BINDING | GO_STRUCTURAL_CONSTITUENT_OF_MUSCLE
GO_NEUROPEPTIDE_RECEPTOR_ACTIVITY | GO_WIDE_PORE_CHANNEL_ACTIVITY
GO_CATION_TRANSPORTING_ATPASE_ACTIVITY | GO_LIPASE_ACTIVITY
** Contrast BasalvsML **
GO_G_PROTEIN_COUPLED_RECEPTOR_ACTIVITY | GO_TRANSMEMBRANE_RECEPTOR_PROTEIN_KINASE_ACTIVITY
GO_STRUCTURAL_CONSTITUENT_OF_MUSCLE | GO_VOLTAGE_GATED_SODIUM_CHANNEL_ACTIVITY
GO_CORECEPTOR_ACTIVITY | GO_TRANSMEMBRANE_RECEPTOR_PROTEIN_TYROSINE_KINASE_ACTIVITY
GO_LIPID_TRANSPORTER_ACTIVITY | GO_SULFOTRANSFERASE_ACTIVITY
GO_CATION_TRANSPORTING_ATPASE_ACTIVITY | GO_PEPTIDE_RECEPTOR_ACTIVITY
** Contrast LPvsML **
GO_MANNOSE_BINDING | GO_PHOSPHORIC_DIESTER_HYDROLASE_ACTIVITY
GO_BETA_1_3_GALACTOSYLTRANSFERASE_ACTIVITY | GO_COMPLEMENT_BINDING
GO_ALDEHYDE_DEHYDROGENASE_NAD_ACTIVITY | GO_MANNOSIDASE_ACTIVITY
GO_LIGASE_ACTIVITY_FORMING_CARBON_NITROGEN_BONDS | GO_CARBOHYDRATE_PHOSPHATASE_ACTIVITY
GO_LIPASE_ACTIVITY | GO_PEPTIDE_RECEPTOR_ACTIVITY
** Comparison analysis **
GO_STRUCTURAL_CONSTITUENT_OF_MUSCLE | GO_LIPID_TRANSPORTER_ACTIVITY
GO_CATION_TRANSPORTING_ATPASE_ACTIVITY | GO_CHEMOREPELLENT_ACTIVITY
GO_HEPARAN_SULFATE_PROTEOGLYCAN_BINDING | GO_TRANSMEMBRANE_RECEPTOR_PROTEIN_TYROSINE_KINASE_ACTIVITY
GO_LIPASE_ACTIVITY | GO_PEPTIDE_RECEPTOR_ACTIVITY
GO_CORECEPTOR_ACTIVITY | GO_TRANSMEMBRANE_RECEPTOR_PROTEIN_KINASE_ACTIVITY
**** Top 10 gene sets in the KEGG Pathways collection ****
** Contrast BasalvsLP **
Collecting duct acid secretion | alpha-Linolenic acid metabolism
Synaptic vesicle cycle | Hepatitis C
Vascular smooth muscle contraction | Rheumatoid arthritis
cGMP-PKG signaling pathway | Axon guidance
Progesterone-mediated oocyte maturation | Arrhythmogenic right ventricular cardiomyopathy (ARVC)
** Contrast BasalvsML **
Collecting duct acid secretion | Synaptic vesicle cycle
Other glycan degradation | Axon guidance
Arrhythmogenic right ventricular cardiomyopathy (ARVC) | Glycerophospholipid metabolism
Lysosome | Vascular smooth muscle contraction
Protein digestion and absorption | Oxytocin signaling pathway
** Contrast LPvsML **
Glycosylphosphatidylinositol(GPI)-anchor biosynthesis | Histidine metabolism
Drug metabolism - cytochrome P450 | PI3K-Akt signaling pathway
Proteasome | Sulfur metabolism
Renin-angiotensin system | Nitrogen metabolism
Tyrosine metabolism | Systemic lupus erythematosus
** Comparison analysis **
Collecting duct acid secretion | Synaptic vesicle cycle
Vascular smooth muscle contraction | Axon guidance
Arrhythmogenic right ventricular cardiomyopathy (ARVC) | Oxytocin signaling pathway
Lysosome | Adrenergic signaling in cardiomyocytes
Linoleic acid metabolism | cGMP-PKG signaling pathway

EGSEA's comparative analysis allows researchers to estimate the significance of a gene set across multiple experimental contrasts.
This analysis helps in the identification of biological processes that are perturbed in multiple experimental conditions simultaneously. This experiment is the RNA-seq equivalent of Lim et al. (2010)22, who used Illumina microarrays to study the same cell populations (see later), so it is reassuring to observe the LIM gene signatures derived from this experiment amongst the top ranked c2 gene signatures in both the individual contrasts and comparative results.

Another way of exploring the EGSEA results is to retrieve the top ranked N sets in each collection and contrast using the method topSets. For example, the top 10 gene sets in the c2 collection for the comparative analysis can be retrieved as follows:

> topSets(...)
Extracting the top gene sets of the collection
c2 Curated Gene Sets for the contrast comparison
Sorted by med.rank
 [1] "LIM_MAMMARY_LUMINAL_MATURE_DN"
 [2] "LIM_MAMMARY_STEM_CELL_DN"
 [3] "NAKAYAMA_SOFT_TISSUE_TUMORS_PCA2_UP"
 [4] "LIM_MAMMARY_LUMINAL_MATURE_UP"
 [5] "COLDREN_GEFITINIB_RESISTANCE_DN"
 [6] "LIM_MAMMARY_STEM_CELL_UP"
 [7] "CHARAFE_BREAST_CANCER_LUMINAL_VS_MESENCHYMAL_UP"
 [8] "LIM_MAMMARY_LUMINAL_PROGENITOR_UP"
 [9] "BERTUCCI_MEDULLARY_VS_DUCTAL_BREAST_CANCER_DN"
[10] "MIKKELSEN_IPS_WITH_HCP_H3K27ME3"

The gene sets are ordered based on their med.rank, as selected when egsea was invoked above. When the argument names.only is set to FALSE, additional information is displayed for each gene set, including the gene set annotation, the EGSEA scores and the individual rankings by each base method. As expected, gene sets retrieved by EGSEA included the LIM gene sets22 that were derived from microarray profiles of analogous mammary cell populations, as well as those derived from populations with similar origin (sets 7 and 9) and behaviour or characteristics (sets 5 and 10).

Next, topSets can be used to search for gene sets of interest based on different EGSEA scores as well as the rankings of individual methods. For example, the ranking of the six LIM gene sets from the c2 collection can be displayed based on the med.rank as follows:

> t = topSets(...)
> t
                                          p.adj Rank med.rank vote.rank
LIM_MAMMARY_LUMINAL_MATURE_DN      1.646053e-29    1       36         5
LIM_MAMMARY_STEM_CELL_DN           6.082053e-43    2       37         5
LIM_MAMMARY_LUMINAL_MATURE_UP      2.469061e-22    4       92         5
LIM_MAMMARY_STEM_CELL_UP          3.154132e-103    6      134         5
LIM_MAMMARY_LUMINAL_PROGENITOR_UP  3.871536e-30    8      180         5
LIM_MAMMARY_LUMINAL_PROGENITOR_DN  2.033005e-06  178      636       115

While five of the LIM gene sets are ranked in the top 10 by EGSEA, the values shown in the median rank (med.rank) column indicate that individual methods can assign much lower ranks to these sets. EGSEA's prioritisation of these gene sets demonstrates the benefit of an ensemble approach.
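The topSets calls above also lost their arguments in extraction. A sketch of the two queries under the same assumptions (gs.label selects the collection, contrast the comparison; the number value for the second call is an assumption chosen to be large enough to include all c2 sets):

# Top 10 c2 gene sets for the comparative analysis (names only)
topSets(gsa, gs.label = "c2", contrast = "comparison", number = 10)

# Full table of scores and per-method ranks, then subset to LIM sets
t = topSets(gsa, gs.label = "c2", contrast = "comparison",
            names.only = FALSE, number = 5000, verbose = FALSE)
t[grep("LIM_", rownames(t)), c("p.adj", "Rank", "med.rank", "vote.rank")]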
Similarly, we can find the top 10 pathways in the KEGG collection from the ensemble analysis for the Basal versus LP contrast and the comparative analysis as follows:

> topSets(...)
Extracting the top gene sets of the collection
KEGG Pathways for the contrast BasalvsLP
Sorted by med.rank
 [1] "Collecting duct acid secretion" "alpha-Linolenic acid metabolism"
 [3] "Synaptic vesicle cycle" "Hepatitis C"
 [5] "Vascular smooth muscle contraction" "Rheumatoid arthritis"
 [7] "cGMP-PKG signaling pathway" "Axon guidance"
 [9] "Progesterone-mediated oocyte maturation" "Arrhythmogenic right ventricular cardiomyopathy (ARVC)"

> topSets(...)
Extracting the top gene sets of the collection
KEGG Pathways for the contrast comparison
Sorted by med.rank
 [1] "Collecting duct acid secretion" "Synaptic vesicle cycle"
 [3] "Vascular smooth muscle contraction" "Axon guidance"
 [5] "Arrhythmogenic right ventricular cardiomyopathy (ARVC)" "Oxytocin signaling pathway"
 [7] "Lysosome" "Adrenergic signaling in cardiomyocytes"
 [9] "Linoleic acid metabolism" "cGMP-PKG signaling pathway"

EGSEA highlights many pathways with known importance in the mammary gland, such as those associated with distinct roles in lactation like basal cell contraction (Vascular smooth muscle contraction and Oxytocin signalling pathway) and milk production and secretion from luminal lineage cells.

> plotHeatmap(...)
Generating heatmap for LIM_MAMMARY_STEM_CELL_UP from the collection
c2 Curated Gene Sets and for the contrast comparison
> plotHeatmap(...)
Generating heatmap for LIM_MAMMARY_STEM_CELL_DN from the collection
c2 Curated Gene Sets and for the contrast comparison

When using plotHeatmap, the gene.set value must match the name returned from the topSets method. The rows of the heatmap represent the genes in the set and the columns represent the experimental contrasts. The heatmap colour-scale ranges from down-regulated (blue) to up-regulated (red), while the row labels (gene symbols) are coloured in green when the genes are statistically significant in the DE analysis (i.e. FDR <= 0.05 in at least one contrast). Heatmaps can be generated for individual comparisons by changing the contrast argument of plotHeatmap. The plotHeatmap method also generates a CSV file that includes the DE analysis results from limma::topTable for all expressed genes in the selected gene set and for each contrast (in the case of contrast = "comparison"). This file can be used to create customised plots using other R/Bioconductor packages.

In addition to heatmaps, pathway maps can be generated for the KEGG gene sets using the plotPathway method, which uses functionality from the pathview package36. For example, the third KEGG signalling pathway retrieved for the contrast BasalvsLP is Vascular smooth muscle contraction and can be visualized as follows:

> plotPathway(...)
Generating pathway map for Vascular smooth muscle contraction from the collection
KEGG Pathways and for the contrast BasalvsLP based on the limma DE analysis
> plotPathway(...)
Generating pathway map for Vascular smooth muscle contraction from the collection
KEGG Pathways and for the contrast comparison

The comparative pathway map shows the log-fold-changes for each gene in each contrast by dividing the gene nodes on the map into multiple columns, one for each contrast.
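The plotting calls above were likewise stripped of their arguments. A sketch under the same assumptions (file.name values are illustrative, not from the source):

# Heatmap of one gene set across all contrasts; gene.set must match
# a name returned by topSets
plotHeatmap(gsa, gene.set = "LIM_MAMMARY_STEM_CELL_UP",
            gs.label = "c2", contrast = "comparison",
            file.name = "hm_cmp_LIM_MAMMARY_STEM_CELL_UP")

# KEGG pathway map for a single contrast and for the comparison
plotPathway(gsa, gene.set = "Vascular smooth muscle contraction",
            gs.label = "kegg", contrast = "BasalvsLP",
            file.name = "VSMC-pathway-BasalvsLP")
plotPathway(gsa, gene.set = "Vascular smooth muscle contraction",
            gs.label = "kegg", contrast = "comparison",
            file.name = "VSMC-pathway-comparison")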
Visualizing results at the experiment level. Since EGSEA combines the results from multiple gene set testing methods, it can be interesting to compare how different base methods rank a given gene set collection for a selected contrast. The plotMethods command generates a multi-dimensional scaling (MDS) plot for the ranking of gene sets across all the base methods used. Methods that rank gene sets similarly, typically those sharing the same type of null hypothesis (self-contained versus competitive), tend to appear close together on this plot.

> plotMethods(...)
Generating MDS plot for the collection
c2 Curated Gene Sets and for the contrast BasalvsLP
> plotMethods(...)
Generating MDS plot for the collection
c5BP GO Gene Sets and for the contrast BasalvsLP

The significance of each gene set in a given collection for a selected contrast can be visualized using EGSEA's plotSummary method.

> plotSummary(...)
Generating Summary plots for the collection
KEGG Pathways and for the contrast LPvsML

The summary plot displays each gene set as a bubble based on its -log10(p-value) (X-axis) and the average absolute log fold-change of the set genes (Y-axis). The sets that appear towards the top-right corner of this plot are most likely to be biologically relevant. EGSEA generates two types of summary plots: the directional summary plot and the rank-based summary plot (based on the sort.by argument). The bubble size is based on the EGSEA significance score in the former plot and the gene set size in the latter. For example, the summary plots of the KEGG pathways for the LP vs ML contrast show few significant pathways.

> plotSummary(...)
Generating Summary plots for the collection
c2 Curated Gene Sets and for the contrast LPvsML

The argument x.cutoff can be used to focus in on the significant gene sets rather than plotting the entire gene set collection, for example:

> plotSummary(...)
Generating Summary plots for the collection
c2 Curated Gene Sets and for the contrast LPvsML

Comparative summary plots can also be generated to compare the significance of gene sets between two contrasts using the plotSummary method, for example for the comparison between Basal vs LP and Basal vs ML:

> plotSummary(...,
+ file.name = "summary_kegg_1vs2")
Generating Summary plots for the collection
KEGG Pathways and for the comparison BasalvsLP vs BasalvsML

The plotSummary method has two useful parameters: (i) use.names, which can be used to display gene set names instead of gene set IDs, and (ii) interactive, which can be used to generate an interactive version of this plot.
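A sketch of the summary-plot calls whose arguments were lost above, with illustrative file names; the x.cutoff value and the two-contrast form of the contrast argument follow the behaviour described in the text:

# Summary plot for one collection and contrast
plotSummary(gsa, gs.label = "kegg", contrast = "LPvsML",
            file.name = "summary_kegg_LPvsML")

# Zoom in on the significant sets only (cutoff value is an assumption)
plotSummary(gsa, gs.label = "c2", contrast = "LPvsML",
            file.name = "summary_c2_LPvsML_sig", x.cutoff = 15)

# Comparative summary plot for two contrasts (BasalvsLP vs BasalvsML)
plotSummary(gsa, gs.label = "kegg", contrast = c(1, 2),
            file.name = "summary_kegg_1vs2")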
The c5 collection of MSigDB and the Gene Ontology collection of GeneSetDB contain Gene Ontology (GO) terms. These collections are meant to be non-redundant, containing only a small subset of the entire GO, and visualizing how these terms are related to each other can be informative. EGSEA utilizes functionality from the topGO package37 to generate GO graphs for the significant biological processes (BPs), cellular compartments (CCs) and molecular functions (MFs). The plotGOGraph method can generate such a display as follows:

> plotGOGraph(...)
Generating GO Graphs for the collection c5 GO Gene Sets (BP)
and for the contrast BasalvsLP based on the med.rank
> plotGOGraph(...)
Generating GO Graphs for the collection c5 GO Gene Sets (CC)
and for the contrast BasalvsLP based on the med.rank

The GO graphs are coloured based on the values of the argument sort.by, which in this instance was taken as med.rank by default since this was selected when EGSEA was invoked. The top five most significant GO terms are highlighted by default in each GO category. More terms can be displayed by changing the value of the parameter noSig. However, this might generate very complicated and unresolved graphs. The colour of the nodes varies between red (most significant) and yellow (least significant). The values of the sort.by scoring function are scaled between 0 and 1 to generate these graphs.

Gene sets can also be visualized in a bar plot. The method plotBars can be used to generate a bar plot for the top N gene sets in an individual collection for a particular contrast or from a comparative analysis across multiple contrasts. For example, the top 20 gene sets of the comparative analysis carried out on the c2 collection of MSigDB can be visualized in a bar plot:

> plotBars(...)
Generating a bar plot for the collection
c2 Curated Gene Sets and the contrast comparison

The colour of the bars is based on the regulation direction of the gene sets, i.e., red for up-regulated, blue for down-regulated and purple for neutral regulation. By default, the -log10(p.adj) values are plotted for the top 20 gene sets, selected and ordered based on the sort.by parameter. The parameters bar.vals, number and sort.by of plotBars can be changed to customize the bar plot.

When multiple contrasts are analysed, a summary heatmap can be a useful visualization. The method plotSummaryHeatmaps generates a heatmap of the top N gene sets in the comparative analysis across all experimental conditions:

> plotSummaryHeatmap(...)
Generating summary heatmap for the collection c2 Curated Gene Sets
sort.by: med.rank, hm.vals: avg.logfc.dir, show.vals:
> plotSummaryHeatmap(...)
Generating summary heatmap for the collection KEGG Pathways
sort.by: med.rank, hm.vals: avg.logfc.dir, show.vals:

We find the heatmap view at both the gene set and summary level and the summary level bar plots to be useful summaries to include in publications to highlight the gene set testing results. The top differentially expressed genes from each contrast can be accessed from the EGSEAResults object using the limmaTopTable method.

> t = limmaTopTable(...)
> head(t)
       ENTREZID   SYMBOL CHR logFC AveExpr     t  P.Value adj.P.Val    B
19253     19253   Ptpn18   1 -5.63    4.13 -34.5 5.87e-10  9.62e-07 13.2
16324     16324    Inhbb   1 -4.79    6.46 -33.2 7.99e-10  9.62e-07 13.3
53624     53624    Cldn7  11 -5.51    6.30 -40.2 1.75e-10  9.62e-07 14.5
218518   218518 Marveld2  13 -5.14    4.93 -34.8 5.56e-10  9.62e-07 13.5
12759     12759      Clu  14 -5.44    8.86 -41.0 1.52e-10  9.62e-07 14.7
70350     70350    Basp1  15 -6.07    5.25 -34.3 6.22e-10  9.62e-07 13.3

Creating an HTML report of the results. To generate an EGSEA HTML report for this dataset, you can either set report=TRUE when you invoke egsea or use the S4 method generateReport as follows:

> generateReport(...)
EGSEA HTML report is being generated ...

The EGSEA report generated for this dataset is available online at http://bioinf.wehi.edu.au/EGSEA/mam-rnaseq-egsea-report/index.html. Interactive tables from the DT package (https://CRAN.R-project.org/package=DT) and summary plots from plotly (https://CRAN.R-project.org/package=plotly) are integrated into the report using htmlwidgets (https://CRAN.R-project.org/package=htmlwidgets) and can be added by setting interactive = TRUE in the command above. This option significantly increases both the run time and size of the final report due to the large number of gene sets in most collections.

This example completes our overview of EGSEA's gene set testing and plotting capabilities for RNA-seq data. Readers can refer to the EGSEA vignette or individual help pages for further details on each of the above methods and classes.
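The remaining visualization and reporting calls can be sketched in the same hedged way (argument names follow the EGSEA documentation; values are illustrative):

# Bar plot of the top 20 comparative c2 gene sets
plotBars(gsa, gs.label = "c2", contrast = "comparison",
         file.name = "bars_c2_comparison", number = 20)

# Summary heatmaps for the c2 and KEGG collections
plotSummaryHeatmap(gsa, gs.label = "c2", hm.vals = "avg.logfc.dir",
                   file.name = "sum_heatmap_c2")
plotSummaryHeatmap(gsa, gs.label = "kegg", hm.vals = "avg.logfc.dir",
                   file.name = "sum_heatmap_kegg")

# Top DE genes for the first contrast, and a stand-alone HTML report
t = limmaTopTable(gsa, contrast = 1)
generateReport(gsa, number = 20,
               report.dir = "./mam-rnaseq-egsea-report")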
The second dataset analysed in this workflow comes from Lim et al. (2010)22 and is the microarray equivalent of the RNA-seq data analysed above. Support for microarray data is a new feature in EGSEA, and in this example, we show an express route for analysis according to the steps shown in the workflow overview figure.

To analyse this dataset, we begin by unzipping the files downloaded from http://bioinf.wehi.edu.au/EGSEA/arraydata.zip into the current working directory. Illumina BeadArray data can be read in directly using the readIDAT and readBGX functions from the illuminaio package38. However, a more convenient way is via the read.idat function in limma, which uses these illuminaio functions and outputs the data as an EListRaw object for further processing.

> library(limma)
> targets = read.delim(...)
> data = read.idat(as.character(targets$File),
+ bgxfile="GPL6887_MouseWG-6_V2_0_R0_11278593_A.bgx",
+ annotation=c(...))
Reading manifest file GPL6887_MouseWG-6_V2_0_R0_11278593_A.bgx ... Done
 4481850214_B_Grn.idat ... Done
 4481850214_C_Grn.idat ... Done
 4481850214_D_Grn.idat ... Done
 4481850214_F_Grn.idat ... Done
 4481850187_A_Grn.idat ... Done
 4481850187_B_Grn.idat ... Done
 4481850187_D_Grn.idat ... Done
 4481850187_E_Grn.idat ... Done
 4481850187_F_Grn.idat ... Done
 4466975058_A_Grn.idat ... Done
 4466975058_B_Grn.idat ... Done
 4466975058_C_Grn.idat ... Done
 4466975058_D_Grn.idat ... Done
 4466975058_E_Grn.idat ... Done
 4466975058_F_Grn.idat ... Done
Finished reading data.
> data$other$Detection = detectionPValues(data)
> data$targets = targets
> colnames(data) = targets$Sample

Next the neqc function in limma is used to carry out normexp background correction and quantile normalisation on the raw intensity values using negative control probes39. This is followed by log2-transformation of the normalised intensity values and removal of the control probes.

> data = neqc(data)

We then filter out probes that are consistently non-expressed or lowly expressed throughout all samples, as they are uninformative in downstream analysis. Our threshold for expression requires probes to have a detection p-value of less than 0.05 in at least 5 samples (the number of samples within each group). We next remove genes without a valid Entrez ID and, in cases where there are multiple probes targeting different isoforms of the same gene, select the probe with highest average expression as the representative one to use in the EGSEA analysis. This leaves 7,123 probes for further analysis.

> table(targets$Celltype)
Basal    LP    ML
    5     5     5
> keep.exprs = rowSums(data$other$Detection<0.05)>=5
> table(keep.exprs)
keep.exprs
FALSE  TRUE
23638 21643
> data = data[keep.exprs, ]
> dim(data)
[1] 21643 15
> head(data$genes)
      Probe_Id Array_Address_Id Entrez_Gene_ID        Symbol Chromosome
3 ILMN_1219601          2030280                C920011N12Rik
4 ILMN_1252621          1980164         101142 2700050P07Rik          6
6 ILMN_3162407          6220026                        Zfp36
7 ILMN_2514723          2030072                1110067B18Rik
8 ILMN_2692952          6040743         329831 4833436C18Rik          4
9 ILMN_1257952          7160091                B930060K05Rik
> sum(is.na(data$genes$Entrez_Gene_ID))
[1] 11535
> data1 = data[!is.na(data$genes$Entrez_Gene_ID), ]
> dim(data1)
[1] 10108 15
> ord = order(lmFit(data1)$Amean, decreasing=TRUE)
> ids2keep = data1$genes$Array_Address_Id[ord][!duplicated(data1$genes$Entrez_Gene_ID[ord])]
> data1 = data1[data1$genes$Array_Address_Id %in% ids2keep, ]
> dim(data1)
[1] 7123 15
> expr = data1$E
> group = as.factor(data1$targets$Celltype)
> probe.annot = data1$genes
> head(probe.annot)
      Array_Address_Id Entrez_Gene_ID  Symbol
39513          4120224          20102   Rps4x
9062           2260576          22143  Tuba1b
15308          5720202          12192 Zfp36l1
39894          1470600          11947   Atp5b
24709          2710477          20088   Rps24
9872           1580471         228033  Atp5g3

As before, we need to set up an appropriate linear model29 and contrasts matrix to look for differences between the Basal and LP, Basal and ML and LP and ML populations.
A batch term is included in the linear model to account for differences in expression that are attributable to the day the experiment was run.

> head(data1$targets)
                     File Sample Celltype Time Experiment
2-2 4481850214_B_Grn.idat    2-2       ML  At1          1
3-3 4481850214_C_Grn.idat    3-3       LP  At1          1
4-4 4481850214_D_Grn.idat    4-4    Basal  At1          1
6-7 4481850214_F_Grn.idat    6-7       ML  At2          1
7-8 4481850187_A_Grn.idat    7-8       LP  At2          1
8-9 4481850187_B_Grn.idat    8-9    Basal  At2          1
> experiment = as.character(data1$targets$Experiment)
> design = model.matrix(~0 + group + experiment)
> colnames(design) = gsub(...)
> design
   Basal LP ML experiment2
1      0  0  1           0
2      0  1  0           0
3      1  0  0           0
4      0  0  1           0
5      0  1  0           0
6      1  0  0           0
7      0  0  1           0
8      0  1  0           0
9      1  0  0           0
10     0  0  1           1
11     0  1  0           1
12     1  0  0           1
13     1  0  0           1
14     0  0  1           1
15     0  1  0           1
attr(,"assign")
[1] 1 1 1 2
attr(,"contrasts")
attr(,"contrasts")$group
[1] "contr.treatment"
attr(,"contrasts")$experiment
[1] "contr.treatment"
> contr.matrix = makeContrasts(...)
> contr.matrix
             Contrasts
Levels        BasalvsLP BasalvsML LPvsML
  Basal               1         1      0
  LP                 -1         0      1
  ML                  0        -1     -1
  experiment2         0         0      0

We next extract the mouse c2, c5 and KEGG gene signature collections from the EGSEAdata package and build indexes based on Entrez IDs that link between the genes in each signature and the rows of our expression matrix.

> library(EGSEA)
> library(EGSEAdata)
> gs.annots = buildIdx(..., go.part = TRUE)
[1] "Loading MSigDB Gene Sets ... "
[1] "Loaded gene sets for the collection c2 ..."
[1] "Indexed the collection c2 ..."
[1] "Created annotation for the collection c2 ..."
[1] "Loaded gene sets for the collection c5 ..."
[1] "Indexed the collection c5 ..."
[1] "Created annotation for the collection c5 ..."
MSigDB c5 gene set collection has been partitioned into
c5BP, c5CC, c5MF
[1] "Building KEGG pathways annotation object ... "
> names(gs.annots)
[1] "c2" "c5BP" "c5CC" "c5MF" "kegg"

The same 11 base methods used previously in the RNA-seq analysis were selected for the ensemble testing of the microarray data using the function egsea.ma. Gene sets were again prioritised by their median rank across the 11 methods.

> baseMethods = egsea.base()[-2]
> baseMethods
 [1] "camera" "safe" "gage" "padog" "plage" "zscore"
 [7] "gsva" "ssgsea" "globaltest" "ora" "fry"
>
> gsam = egsea.ma(...)
EGSEA analysis has started
##------ Tue Jun 20 14:27:32 2017 ------##
Log fold changes are estimated using limma package ...
limma DE analysis is carried out ...
Number of used cores has changed to 3
in order to avoid CPU overloading.
EGSEA is running on the provided data and c2 collection
EGSEA is running on the provided data and c5BP collection
EGSEA is running on the provided data and c5CC collection
EGSEA is running on the provided data and c5MF collection
EGSEA is running on the provided data and kegg collection
##------ Tue Jun 20 14:33:37 2017 ------##
EGSEA analysis took 365.359 seconds.
EGSEA analysis has completed
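As with egsea earlier, the arguments of the egsea.ma call were stripped in extraction. A sketch using the objects prepared in the preceding steps (argument names as documented for egsea.ma; values are assumptions):

gsam = egsea.ma(expr = expr,               # normalised log-expression matrix
                group = group,             # cell type factor
                probe.annot = probe.annot, # probe-to-Entrez annotation
                design = design,
                contrasts = contr.matrix,
                gs.annots = gs.annots,
                baseGSEAs = baseMethods,   # the 11 methods selected above
                sort.by = "med.rank",
                num.threads = 3,
                report = FALSE)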
An HTML report that includes each of the gene set level and summary level plots shown individually for the RNA-seq analysis was then created using the generateReport function. We complete our analysis by displaying the top ranked sets for the c2 collection from a comparative analysis across all contrasts.

> generateReport(...)
EGSEA HTML report is being generated ...
> topSets(...)
Sorted by med.rank
 [1] "LIM_MAMMARY_STEM_CELL_UP"
 [2] "LIM_MAMMARY_LUMINAL_MATURE_DN"
 [3] "LIM_MAMMARY_STEM_CELL_DN"
 [4] "CHARAFE_BREAST_CANCER_LUMINAL_VS_MESENCHYMAL_DN"
 [5] "LIU_PROSTATE_CANCER_DN"

The EGSEA report generated for this dataset is available online at http://bioinf.wehi.edu.au/EGSEA/mam-ma-egsea-report/index.html. Reanalysis of this data retrieves similar c2 gene sets to those identified by analysis of RNA-seq data. These included the LIM gene signatures as well as those derived from populations with similar cellular origin (set 4).

In this workflow article, we have demonstrated how to use the EGSEA package to combine the results obtained from different gene signature databases across multiple GSE methods to find an ensemble solution. A key benefit of an EGSEA analysis is the detailed and comprehensive HTML report that can be shared with collaborators to help them interpret their data. This report includes tables prioritising gene signatures according to the user-specified analysis options, and both gene set specific and summary graphics, each of which can be generated individually using specific R commands. The approach taken by EGSEA is facilitated by the diverse range of gene set testing algorithms and plotting capabilities available within Bioconductor. EGSEA has been tailored to suit a limma-based differential expression analysis, which continues to be a very popular and flexible platform for transcriptomic data. Analysts who choose an individual GSE algorithm to prioritise their results rather than an ensemble solution can still benefit from EGSEA's comprehensive reporting capability.

Code to perform this analysis can be found in the EGSEA123 workflow package available from Bioconductor: https://www.bioconductor.org/help/workflows/EGSEA123.
Latest source code is available at: https://github.com/mritchie/EGSEA123.
Archived source code as at the time of publication is available at: https://doi.org/10.5281/zenodo.104343640.
Software license: Artistic License 2.0.

GSEA analysis methods do not all produce the same results. Score-based gene set analysis methods like the Broad Institute GSEA tool are considered to perform better than a normal Fisher's exact test. But analysts often use methods they know to be less than ideal in order to reduce complexity and save time. So it is good to have a unified interface for GSEA analyses in R, as it helps save programming time and reduces complexity. In addition, EGSEA is a unique method that combines up to 12 gene set analysis methods into a single score. Independent tests also corroborate that the tool, using the 12 methods, has more specificity and good sensitivity compared to using some of the tests alone. The EGSEA 1-2-3 workflow is easy to use and generates good-quality figures with the ggplot2 R package. Some of the figures are novel compared to other packages, e.g., scatter plots designed to compare different contrasts. It is also very useful that the tool can be applied to multiple contrasts at a time, although if there are too many contrasts then the number of plots becomes unwieldy. Some more technical comments: The results object is very complicated for retrieving individual method analysis results. I quite like the "biobroom" Bioconductor package that does "tidy" data frames from limma results objects. All in all a very useful package, both for automating the running of lots of methods at the same time and of course for the "ensemble" method. It is recommended to be considered to be part of a standard bioinformatics workflow.

We have read this submission. We believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
EGSEA is a new gene set analysis tool that combines results from multiple individual tools in R so as to yield better results. The authors have published the EGSEA methodology previously. This paper focuses on the practical analysis workflow based on EGSEA, with specific examples. As EGSEA is a compound and complicated analysis procedure, this work serves as valuable guidance for users wanting to make full use of this tool. I have gone through the workflow line by line, and it seems to work well. However, the authors can improve their work by addressing the following issues.

There should be an R code script which includes all source code and concise comments, like the one that accompanies the vignette in any Bioconductor package. It would be much easier for the users/reviewers to try the example code. It is not convenient to follow the code in this manuscript; the code needs to be edited to remove the prompt symbols (> or +) at each line when copying/pasting.

It takes too long to run the egsea analysis example on a modest machine. It is advisable to show a smaller example in the workflow with only one gene set collection, like kegg, and just a few base methods, like: gsa = egsea(...)

The rank of the gsa results shown following the t = topSets(..) line is confusing. The p.adj for the top 1 gene set is not the smallest; it is actually much bigger than for the top 2, 6 and 8 sets. Presumably, the gene sets are ranked by med.rank instead of p.adj here. However, the opposite was described in the text above near the egsea.sort line: "Although p.adj is the default option for sorting EGSEA results for convenience, ..."

In addition, there is a big difference between the final rank and med.rank (e.g. 1 vs 36). This may suggest inconsistent results came from different base methods. This may also be due to the large number of gene sets being tested. Again, using a smaller gene set collection and a few base methods could make the ranking more consistent.

All visualization functions, i.e. plotHeatmap, plotPathway, plotGOGraph, plotMethods, plotSummary and plotBars, share largely the same set of arguments; they could have a unified wrapper function like plot.gsa with an extra argument type to specify the plot type.

Functions plotPathway and plotGOGraph are wrapper functions for those in the pathview and topGO packages, as the authors noted in the text. It would be good to explicitly show a message like "calling plotting function from pathview or topGO package etc.", just like the messages shown when running egsea.

The HTML report of the results is a very valuable feature for the users. However, the code can run for a long time; it would be helpful to add some progress reminder messages to the generateReport function, like those in egsea. BTW, the KEGG Pathway graphs are not shown properly in the report example at http://bioinf.wehi.edu.au/EGSEA/mam-rnaseq-egsea-report/index.html.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

This F1000 software tool article describes the EGSEA package that incorporates many different gene set testing methods from various packages and also allows access to a wide array of gene sets from different databases through the accompanying EGSEAdata package.
These packages will enable researchers to conveniently test many different methods and incorporate their results to get more robust biological insights1, and this article gives a well-written walk-through of how to use the packages.

The biggest limitation I see is that EGSEA is focused only on human and mouse data (and rat? The article does not list rat but the help page for buildIdx lists rat as one of the species). I understand that many of the gene set collections like MSigDB and GeneSetDB are only available for human/mouse, but KEGG currently lists 429 Eukaryotic organisms (http://www.genome.jp/kegg/catalog/org_list.html) and GO terms are readily available for 19 species using BioC's pre-built OrgDB packages and hundreds of others through AnnotationHub. It is unclear whether the EGSEA functions buildCustomIdx and buildGMTIdx that were "written to allow users to run EGSEA on gene set collections that may have been curated within a lab or downloaded from public databases and allow use of gene identifiers other than Entrez IDs" can be used to run EGSEA on additional species. If so, this should be clearly stated in both the Abstract and in the body of the article, plus an example given on how to use buildCustomIdx for another species. If there is some reason that EGSEA cannot currently extend to other species, this should be acknowledged as a limitation and future versions should strive to allow this.

Running the workflow on my machine produced an error ("EXPR must be a length 1 vector"). However, I reported this error to the support site (https://support.bioconductor.org/p/103640/#103748) and got a speedy reply from the author. It hopefully will be resolved soon, although there is a concern of why the error was not found on another Windows machine.

I am concerned that, as demonstrated in this paper, EGSEA seems to take the place of a standard limma differential expression analysis, in that the model fitting takes place within the egsea function. Certain gene set testing functions do need the individual expression values and not just the fitted values in an MArrayLM object, but given the computational time (8 min as shown in the article code block and 19 min on my own computer) you should never run egsea without first assessing the model fit on your own! Ideally the egsea function could be written to accept an MArrayLM object, or at least the article should clearly state that users should have first assessed the validity of the model fit through the usual workflow of Law2 prior to running EGSEA.

I also wonder why there are different interfaces for the voom-based analysis and microarray data, given that both use EList objects. I understand that the voom weights need to be used internally, but limma's lmFit function handles both without trouble, although it was originally coded for microarray data and the voom functionality came later. Even if there needs to be a separate function egsea.ma for non-voom, non-count data, it should still accept an EList object so that the user does not have to pull out the expression data and the grouping info.

Back to the computational time required, there are several vague references to removing the roast method "to save time" and that the report generation "significantly" increases run time. It would be nice to have an example of the time required to run roast and the report generation for the computational architecture that created the article.

Other issues to address before approval:

I have read this submission.
I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

This article describes a gene set enrichment analysis (GSEA) workflow for the "Ensemble of GSEA" (EGSEA) R/Bioconductor software package. EGSEA is an ensemble-like method recently published1 by the authors of this workflow that allows the user to simultaneously apply different GSEA algorithms on a high-throughput molecular profiling data set, by combining p-values associated with each algorithm using classical meta-analysis approaches such as Fisher's method.

Because the statistical methodology is already described in detail in the corresponding publication, the present software tool article focuses on showing a step-by-step workflow with EGSEA. However, the vignette of the software package already provides a very detailed description of how to use EGSEA through its 39 pages. Therefore, it would be useful for the interested reader to find upfront when he/she should be consulting the vignette and when he/she should be consulting this workflow. Besides these introductory aspects, the following issues should be addressed before approval:

The code given in the article breaks, at least in my computer, more concretely at this line:

gsa = egsea(...)
EGSEA analysis has started
##------ Mon Nov 27 12:37:42 2017 ------##
Log fold changes are estimated using limma package ...
limma DE analysis is carried out ...
Number of used cores has changed to 4
in order to avoid CPU overloading.
EGSEA is running on the provided data and c2 collection
.......camera*....safe*...gage*.padog*....gsva*..fry*...plage*...globaltest*...zscore*...ora*...ssgsea*
Error in temp.results[[baseGSEA]][[i]] :
incorrect number of dimensions

while running it with the latest release version 1.6.0. This is strange since the package builds and runs the vignette without problems. So, this might be related to the different sample data sets. A possible hint may come from the fact that the 'buildIdx' call is not returning the expected class of object, according to the workflow:

class(gs.annots$s2)
## [1] "NULL"
summary(gs.annots$s2)
## Length  Class   Mode
##      0   NULL   NULL

The workflow contains a rather high amount of code, often with a non-trivial use of externally instantiated objects and nested calls to functions. It would be helpful for the interested reader to be able to easily copy and paste the instructions, but the fact that R commands are given with the R shell '>' and '+' symbols makes it less easy. A non-expert user may even copy those characters and get an error. I would recommend removing those characters from the illustrated code, just as it happens with the vignette.

The workflow assumes that the user has a 'DGEList' object with gene metadata including the mapping between Entrez identifiers and HGNC symbols. This is a rather unrealistic assumption and I would recommend that the workflow starts building that object from scratch, showing how to build that table of gene metadata.

Below I also describe other issues that I would recommend to be considered in future versions of the software but which I do not consider to be required for approval of this article:

The so-called "summary plot" shows the -log10 p-value on the x-axis and average absolute log fold-change of the set genes on the y-axis.
Because this is in a way analogous to a rotated volcano plot, I would suggest using the same arrangement of axes as in the volcano plot, which is a rather standardized display of significance and magnitude of the effects of interest.

One of the key features of the Bioconductor project, to which the EGSEA package is contributing, is enabling software interoperability through sharing the use of common data structures across different software packages. Using specialized data structures, where analogous ones have already been designed by the Bioconductor core team or by a wider community of developers, locks the user into that package and limits the possibilities of using it as a building block in other more complex workflows. I'm making this comment because I have the impression that the EGSEA package would benefit from using the infrastructure provided by the Bioconductor GSEABase package, in which data structures are defined to store and access gene sets and collections of gene sets of different kinds. A salient feature of that infrastructure is the possibility to seamlessly map gene identifiers of different kinds. This would simplify and improve the user experience of EGSEA, since mapping between genes coded with a particular kind of identifier and gene sets defined with another kind is one of the most common tasks in a GSEA-like analysis.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
This is a particularly acute problem for holocentric nematodes because of the large number of satellite DNA sequences found throughout their genomes. These have been recalcitrant to most genome sequencing methods. At the same time, many nematodes are parasites and some represent a serious threat to human health. There is a pressing need for better molecular characterization of animal and plant parasitic nematodes. The advent of long-read DNA sequencing methods offers the promise of resolving complex genomes.Nippostrongylus brasiliensis as a test case, applying improved base-calling algorithms and assembly methods, we demonstrate the feasibility of de novo genome assembly matching current community standards using only MinION long reads. In doing so, we uncovered an unexpected diversity of very long and complex DNA sequences repeated throughout the N. brasiliensis genome, including massive tandem repeats of tRNA genes.Using Base-calling and assembly methods have improved sufficiently that de novo genome assembly of large complex genomes is possible using only long reads. The method has the added advantage of preserving haplotypic variants and so has the potential to be used in population analyses.The online version of this article (10.1186/s12915-017-0473-4) contains supplementary material, which is available to authorized users. Necator americanus and Ancylostoma duodenale continue to be a major global health problem. Next-generation sequencing (NGS) techniques open the door to molecular epidemiological monitoring of nematode and helminth parasites in endemic areas. Such studies are, however, hampered by the heterogeneous nature of parasite populations and by the intrinsically complex genome structures of nematodes -based approach using RNA-seq data gave a marginally higher complement of USCO genes, we used this gene set for further analyses. In addition to an elevated proportion of fragmented USCOs (see below), we also noted a high proportion of duplicated USCOs. Inspection revealed that some of these were bona fide lineage-specific expansions. For example, the analysis uncovered three distinct loci encoding isoforms of fructose 1,6-bisphosphatase (PFAM: PF00316), as predicted also from the WTSI assembly. Pairs of USCOs were also found on homologous contigs. There were four such examples in the 12 heterozygous branch subgraphs alone; this presumably reflects haplotypic variants.While generating a complete high-quality annotation was beyond the scope of this study, we made use of expert knowledge regarding carbohydrate-active enzymes (CAZymes) to provide a complementary insight into the predicted genes compared to the WTSI set . Of the N. brasiliensis, attempts are made to maintain genotypic diversity in the population. The assembly that we produced reflects this. Surgical transplantation of single adult parasitic worms has very recently been shown to be feasible, allowing controlled matings and the establishment of inbred lines _??????); do echo ${x}; read_fast5_basecaller.py -t 6 -i ${x} -s called_${x} -o fastq -c r94_450bps_linear.cfg; doneIllumina reads for the WTSI genome assembly were retrieved from the Sequence Read Archive (accession ERR063640). 
These reads were processed into k-mer counts using Jellyfish v2.2.6 , and the counts were then summarised into a histogram:

jellyfish count -C --bf-size 20G -t 6 -s 2G -m 21 <(zcat ERR063640_1P.fastq.gz) <(zcat ERR063640_2P.fastq.gz) -o ERR063640_mer_counts.jf
jellyfish histo ERR063640_mer_counts.jf > ERR063640_mer_counts.histo

Using the FASTQ files, rather than just reads from the called pass bin, allowed a maximum amount of sequence information to be recovered. Reads were trimmed by 65 bp at each end to exclude adapters (see below), then Canu v1.5 was run with default parameters. The estimated genome size was 205 Mb as determined using GenomeScope (qb.cshl.edu/genomescope/) and the WTSI Illumina reads. Long-read sequencing of a heterogeneous population would be expected to generate a longer genome size due to the separation of haplotypes. We, therefore, made a conservative estimate of 300 Mb. The assumed genome size alters how many reads are selected for the final Canu assembly. If the coverage of corrected reads is greater than 40x, then only the longest reads are used (up to 40x coverage). In our case, the coverage was below 40x regardless of what parameters were used, so it would not be expected to alter the outcome. Within Canu, assembly parameters are adjusted when read counts are below 20x (e.g. corMinCoverage = 0), as previously described :

## trim reads
pv called_[CFED]_*_albacore_1.1.0.fq.gz | zcat | ~/scripts/fastx-fetch.pl -t 65 | gzip > called_CFED_65bptrim_albacore_1.1.0.fq.gz

## run Canu
~/install/canu/canu-1.5/Linux-amd64/bin/canu -nanopore-raw called_CFED_65bptrim_albacore_1.1.0.fq.gz -p Nb_ONTCFED_65bpTrim_t1 -d Nb_ONTCFED_65bpTrim_t1 genomeSize=300M

Bowtie2 was used in local mode to map RNA-seq reads to the assembled genome contigs:

bowtie2 -p 10 --local -x Nb_ONTCFED_65bpTrim_t1.contigs.fasta -1 ../1563-all_R1_trimmed.fastq.gz -2 ../1563-all_R2_trimmed.fastq.gz | samtools sort > 1563_vs_uncorrected_NOCFED.bam

Pilon was used to correct based on the RNA-seq mapping to the genome, with structural reassembly disabled (in case it collapsed introns):

java -Xmx40G -jar ~/install/pilon/pilon-1.22.jar --genome Nb_ONTCFED_65bpTrim_t1.contigs.fasta --frags 1563_vs_uncorrected_NOCFED.bam --fix snps,indels --output BT2Pilon_NOCFED --gapmargin 1 --mingap 10000000 --threads 10 --changes 2>BT2Pilon_NOCFED.stderr.txt 1>BT2Pilon_NOCFED.stdout.txt

Contigs that were entirely composed of homopolymer sequences were identified using grep and removed from the assembly:

## identify homopolymer (and binary division-rich) regions
pv BT2Pilon_NOCFED.fasta | ~/scripts/fastx-hplength.pl > hplength_BT2Pilon_NOCFED.txt
pv BT2Pilon_NOCFED.fasta | ~/scripts/fastx-hplength.pl -mode YR > hplength_YR_BT2Pilon_NOCFED.txt
pv BT2Pilon_NOCFED.fasta | ~/scripts/fastx-hplength.pl -mode SW > hplength_SW_BT2Pilon_NOCFED.txt
pv BT2Pilon_NOCFED.fasta | ~/scripts/fastx-hplength.pl -mode MK > hplength_MK_BT2Pilon_NOCFED.txt

## example grep hunt for repeated sequence
cat BT2Pilon_NOCFED.fasta | grep -e '^[AT]\{80\}' -e '^>' | grep --no-group-separator -B 1 '^[AT]\{80\}' | ~/scripts/fastx-length.pl

## exclude contigs and sort by length
~/scripts/fastx-fetch.pl -v tig00010453 tig00024413 tig00024414 tig00023947 | ~/scripts/fastx-sort.pl -l > Nb_ONTCFED_65bpTrim_t1.contigs.hpcleaned.fasta

This produced the final assembly described in the paper. At this stage, we had an assembled genome, but validation of the genome was difficult. We decided to carry out a draft genome-guided transcriptome assembly with Trinity.
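Returning to the Jellyfish histogram computed at the start of this section, the following R sketch illustrates (in simplified form) how a genome size can be estimated from k-mer counts, as GenomeScope does with a full model: total k-mers divided by the homozygous coverage peak. The coverage cutoffs are assumptions for excluding sequencing-error and high-copy-repeat k-mers, not values from the source:

# Crude k-mer based genome-size estimate from the Jellyfish histogram
h <- read.table("ERR063640_mer_counts.histo",
                col.names = c("cov", "count"))
h <- subset(h, cov >= 5 & cov <= 1000)  # assumed error/repeat cutoffs
peak <- h$cov[which.max(h$count)]       # homozygous coverage peak
size <- sum(as.numeric(h$cov) * h$count) / peak
cat("Estimated genome size:", round(size / 1e6, 1), "Mb\n")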
To evaluate the completeness of the genome, we focused on expressed genes and used a set of Illumina RNA-seq reads to perform a genome-guided transcriptome assembly using Trinity. The RNA-seq reads were remapped to the corrected assembly for genome-guided Trinity:

bowtie2 -p 10 -t --local --score-min G,20,8 -x BT2Pilon_NOCFED_hpcleaned.fasta --rf -X 15000 -1 ../1563-all_R1_trimmed.fastq.gz -2 ../1563-all_R2_trimmed.fastq.gz 2>bowtie2_1563_vs_BNOCFED_hp.summary.txt | samtools sort > bowtie2_1563_vs_BNOCFED_hp.bam

## Trinity assembly; assume introns can be up to 15 kb in length
~/install/trinity/trinityrnaseq-Trinity-v2.4.0/Trinity --CPU 10 --genome_guided_bam bowtie2_1563_vs_BNOCFED_hp.bam --genome_guided_max_intron 15000 --max_memory 40G --SS_lib_type RF --output trinity_BNOCFED

The assembly that Trinity generated had similar completeness (as measured by BUSCO) to a de novo assembly generated using the same RNA-seq reads (see Table ).

Our RNA-seq data is publicly available from the European Nucleotide Archive of the European Bioinformatics Institute (https://www.ebi.ac.uk/ena/data/view/ERS1809079), where details of the samples can be found.

The RNA-seq reads were mapped to the Trinity-generated transcripts using Salmon:

## create Salmon index
~/install/salmon/Salmon-0.8.2_linux_x86_64/bin/salmon index -t Trinity-BNOCFED.fasta -i Trinity-BNOCFED.fasta.sai

## quantify transcript coverage with Salmon
~/install/salmon/Salmon-0.8.2_linux_x86_64/bin/salmon quant -i Trinity-BNOCFED.fasta.sai -1 ../../1563-all_R1_trimmed.fastq.gz -2 ../../1563-all_R2_trimmed.fastq.gz -p 10 -o quant/1563-all_quant -l A

The expression of BUSCO genes was used to set a credible signal cutoff . Missing USCOs were then searched for using BLAST:

-db missing_busco_list_intersectionBUSCO_nematodes.fasta -outfmt 6 -evalue 1e-3 > BLAST_NOCFED_vs_BUSCO_missing_intersection.tsv

Hits for the same contig/USCO combination were merged using a custom script, and sequence extracted from the assembly to cover the entire matched region:

> BLAST_NOCFED_vs_BUSCO_missing_intersection.fasta
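The precise rule used to derive the expression cutoff from the BUSCO genes is not preserved above. One possible reconstruction in R, assuming a vector busco_ids of Trinity transcript IDs matching complete BUSCOs (hypothetical name) and the quant.sf file written by the Salmon command above:

# Derive a credible TPM cutoff from BUSCO gene expression (illustrative)
q <- read.delim("quant/1563-all_quant/quant.sf")  # Salmon output table
busco.tpm <- q$TPM[q$Name %in% busco_ids]         # busco_ids is assumed
cutoff <- quantile(busco.tpm, probs = 0.05)       # assumed 5% quantile rule
expressed <- q$Name[q$TPM >= cutoff]              # credibly expressed transcripts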
DNA extracted from a single adult worm was amplified using a Qiagen Midi RepliG kit. Raw reads and called FASTQ files were obtained from ONT following sequencing. Reads were filtered to exclude those from contaminating DNA using OneCodex (http://onecodex.com). Reads that mapped to any DNA in the OneCodex database were excluded from the read set:

(zcat OneCodex_RefSeq_132394.fastq.gz.results.tsv.gz | awk '{if($3 == 0){print $1}}'; zcat OneCodex_OCD_132394.fastq.gz.results.tsv.gz | awk '{if($3 == 0){print $1}}') | sort | uniq -d | gzip > OCunmapped_names_132394.txt.gz
pv 132394.fastq.gz | zcat | ~/scripts/fastx-fetch.pl -i OCunmapped_names_132394.txt.gz | ~/scripts/fastx-fetch.pl -v -i ONTmapped_names_132394.txt.gz | gzip > OCunmapped_ONTunmapped_132394.fastq.gz

Filtered reads from the WGA sample were called using Albacore 1.1.0:

read_fast5_basecaller.py -o fastq -i A_132394 -t 10 -s called_A_132394 -c r94_450bps_linear.cfg

Reads with a length of greater than 10 kb were extracted for subsequent analysis:

pv called_A_132394_albacore_1.1.0.fq.gz | zcat | ~/scripts/fastx-fetch.pl --min 10000 | gzip > 10k_called_A_132394_albacore_1.1.0.fq.gz

To define the region of raw nanopore sequences for adapter exclusion, the >10 kb reads were mapped to 50 M reads that had been generated by WTSI and that had been used for the existing WTSI assembly:

bowtie2 -p 10 --no-unal --no-mixed --local -x 10k_called_A_132394_albacore_1.1.0.fa -1 <(pv ~/bioinf/MIMR-2017-Jan-01-GBIS/GLG/ONT/aws/Sampled_50M_ERR063640.R1.fq.gz | zcat) -2 ~/bioinf/MIMR-2017-Jan-01-GBIS/GLG/ONT/aws/Sampled_50M_ERR063640.R2.fq.gz | samtools sort > WTSI_Sampled_50M_vs_10k_called_A_132394.bam

## find position of first mapped Illumina read for each nanopore read
pv WTSI_Sampled_50M_vs_10k_called_A_132394.bam | samtools view - | awk '{print $3,$4}' | sort -k 1,1 -k 2,2n | sort -u -k 1,1 | gzip > firstHit_WTSI_Sampled_50M_vs_10k_called_A_132394.txt.gz

## determine position of last mapped Illumina read
pv WTSI_Sampled_50M_vs_10k_called_A_132394.bam | samtools view - | awk '{print $3,$4}' | sort -k 1,1 -k 2,2rn | sort -u -k 1,1 | gzip > lastHit_WTSI_Sampled_50M_vs_10k_called_A_132394.txt.gz

## count positions of first reads
zcat firstHit_WTSI_Sampled_50M_vs_10k_called_A_132394.txt.gz | awk '{print $2}' | sort -n | uniq -c | gzip > firstBase_counts_WTSI_Sampled_50M_vs_10k_called_A_132394.txt.gz

When we mapped the 5' ends of Illumina reads to the start of nanopore reads with Bowtie2, there was a common register shift of 28–32 bases, corresponding to the presence of adapter sequences in the nanopore reads. Reads were conservatively trimmed by 65 bases at each end to exclude adapters:

pv called_A_132394_albacore_1.1.0.fq.gz | zcat | ~/scripts/fastx-fetch.pl --min 1130 --max 1000000 | \
~/scripts/fastx-fetch.pl -t 65 | gzip > 65bpTrim_called_A_132394_albacore_1.1.0.fq.gz

Canu v1.5 was used to assemble the trimmed reads.
The assembly was done in stages (with an assembly at each stage) to determine whether or not particular stages were redundant for the assembly:

## attempt assembly-only with Canu v1.5
~/install/canu/canu-1.5/Linux-amd64/bin/canu -assemble -nanopore-raw 65bpTrim_called_A_132394_albacore_1.1.0.fq.gz -p Nb_ONTA_65bpTrim_t1 -d Nb_ONTA_65bpTrim_t1 genomeSize=300M

## attempt assembly + correction
~/install/canu/canu-1.5/Linux-amd64/bin/canu -assemble -nanopore-corrected 65bpTrim_called_A_132394_albacore_1.1.0.fq.gz -p Nb_ONTA_65bpTrim_t2 -d Nb_ONTA_65bpTrim_t2 -correct genomeSize=300M

## attempt stringent trim with corrected reads
~/install/canu/canu-1.5/Linux-amd64/bin/canu -trim-assemble -p Nb_ONTA_65bpTrim_t3 -d Nb_ONTA_65bpTrim_t3 genomeSize=300M -nanopore-corrected Nb_ONTA_65bpTrim_t2/Nb_ONTA_65bpTrim_t2.correctedReads.fasta.gz -trim-assemble trimReadsOverlap=500 trimReadsCoverage=5 obtErrorRate=0.25

An alternative, less-stringent overlap was also attempted (with trimReadsCoverage = 2), but resulted in a less complete assembly. The results of an analysis surrounding the different assemblies suggested that the default Canu assembly process of correction, trimming, then assembly produced the best outcome, namely the WGA assembly presented in Table .

Canu includes a step of normalization, reducing the shortest reads if coverage is over a predefined threshold, but that normalization threshold was not triggered for our assembly. We did not attempt a k-mer-based normalization, since this would not resolve the incomplete genome sequence coverage. Selective amplification combined with random sampling by the sequencer means that poorly amplified regions were unlikely to be present in the sequenced reads. We did notice that an assembly combining both WGA and unamplified DNA samples produced a more fragmented genome, which is why a combined assembly is not presented here. It is possible that k-mer normalization of the WGA reads might improve such a hybrid assembly.
"} {"text": "RNA sequencing (RNA-seq) analyses can benefit from performing a genome-guided and de novo assembly. However, tools for integrating an assembled transcriptome with reference annotation are lacking. Necklace is a software pipeline that runs genome-guided and de novo assembly and combines the resulting transcriptomes with reference genome annotations. Necklace constructs a compact but comprehensive superTranscriptome out of the assembled and reference data. Reads are subsequently aligned and counted in preparation for differential expression testing. Necklace allows a comprehensive transcriptome to be built from a combination of assembled and annotated transcripts, which results in a more comprehensive transcriptome for the majority of organisms. In addition, RNA-seq data are mapped back to this newly created superTranscript reference to enable differential expression testing with standard methods.

Despite the increasing number of species with a sequenced genome, the vast majority of reference genomes are incomplete. They may contain gaps, have unplaced assembly scaffolds, and be poorly annotated. The naïve approach to analyzing RNA sequencing (RNA-seq) on species with a genome would follow the same procedure as for model organisms, i.e., align reads to the genome and count reads overlapping annotated genes, then test for differential expression based on gene counts. However, this approach misses transcripts that are absent from the annotation, which is why a complementary de novo assembly is valuable. For differential expression testing, gene counts were loaded into edgeR:

y <- DGEList(counts = counts)

Genes without a counts per million (cpm) above 0.5 in at least four samples were filtered out and the libraries normalized:

keep <- rowSums(cpm(y) > 0.5) >= 4
y <- y[keep, ]  ## subset to retained genes
y <- calcNormFactors(y)

We then estimated the dispersion and looked for differential expression with an FDR <0.05 (the function arguments were lost in extraction and are restored here per standard edgeR usage, with design the model matrix):

y <- estimateDisp(y, design)
fit <- glmFit(y, design)
qlf <- glmLRT(fit)
is.de <- decideTests(qlf)

Project name: Necklace
Project home page: https://github.com/Oshlack/necklace/wiki
Scicrunch RRID: SCR_016103
Operating systems: Linux
Programming language: Groovy and C/C++
Other requirements: Java 1.8
License: GPL 3.0

An archival snapshot of the code is available in the GigaScience GigaDB repository .

bp: base pair; cpm: counts per million; FDR: false discovery rate; RNA-seq: RNA sequencing.

GIGA-D-17-00354.pdfClick here for additional data file.
GIGA-D-17-00354_R1.pdfClick here for additional data file.
GIGA-D-17-00354_R2.pdfClick here for additional data file.
Response_to_Reviewer_Comments_Original_Submission.pdfClick here for additional data file.
Response_to_Reviewer_Comments_Revision_1.pdfClick here for additional data file.
Reviewer_1_Report_ -- Li Song 1/12/2018 ReviewedClick here for additional data file.
Reviewer_1_Report_(Revision_1) -- Li Song 3/25/2018 ReviewedClick here for additional data file.
Reviewer_2_Report_ -- Mickael Orgeur 1/16/2018 ReviewedClick here for additional data file.
Reviewer_2_Report_(Revision_1) -- Mickael Orgeur 3/21/2018 ReviewedClick here for additional data file."} {"text": "Erwinia amylovora is the causal agent of fire blight, a devastating disease affecting some plants of the Rosaceae family. We isolated bacteriophages from samples collected from infected apple and pear trees along the Wasatch Front in Utah. We announce 19 high-quality complete genome sequences of E. amylovora bacteriophages.
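The announcement that follows groups the 19 phages by average nucleotide identity (ANI). As a minimal illustration of such threshold-based, single-linkage grouping (not the authors' method; it assumes a precomputed symmetric matrix ani[a][b] of pairwise ANI percentages, and the names are hypothetical):

# Single-linkage grouping at an ANI threshold: a phage joins any group that
# contains a member with ANI >= threshold; groups reachable through it merge.
def group_by_ani(names, ani, threshold=97.2):
    groups = []
    for name in names:
        hits = [g for g in groups if any(ani[name][m] >= threshold for m in g)]
        for g in hits:
            groups.remove(g)
        groups.append([name] + [m for g in hits for m in g])
    return groups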
Erwinia amylovora is a Gram-negative, facultative anaerobic, rod-shaped bacterium and the causative agent of fire blight. The phages were isolated and sequenced using 454 pyrosequencing or Illumina HiSeq 2500 sequencing. Contigs were assembled using Newbler version 2.9 and Consed.

The 19 phages fell into five distinct clusters according to genomic analysis. The first group included the jumbo myoviruses vB_EamM_Deimos-Minion, vB_EamM_RAY, vB_EamM_Simmy50, and vB_EamM_Special G, which share a minimum of 97.2% average nucleotide identity with one another. The second group included two jumbo myoviruses, vB_EamM_RisingSun and vB_EamM_Joad, which differ by only two putative gene products. The third group included the diverse jumbo myoviruses vB_EamM_Caitlin, vB_EamM_ChrisDB, vB_EamM_EarlPhillipIV, vB_EamM_Huxley, vB_EamM_Kwan, vB_EamM_Machina, vB_EamM_Parshik, vB_EamM_Phobos, and vB_EamM_Stratton, which share a minimum of 50.5% average nucleotide identity. An additional jumbo myovirus, vB_EamM_Yoloswag, did not have any close phage relatives. Podovirus phages vB_EamP_Frozen, vB_EamP_Gutmeister, and vB_EamP_Rexella share at least 97.2% average nucleotide identity. The four jumbo myovirus groups package DNA by headful packaging, based on homology of their putative terminase genes to the phiKZ terminase. GenBank accession numbers for the 19 Erwinia bacteriophages are listed in Table ."} {"text": "Evidence has shown that physical activity may attenuate the negative physical, psychological and functional effects of treatment in women diagnosed with breast cancer. Physical activity levels also decline substantially during and after completion of treatment for cancer, highlighting the importance of strategies to promote participation in regular physical activity in this population. Oncologists and surgeons may serve as an influential source of motivation for cancer patients to be physically active, by conveying the importance of a healthy lifestyle. The primary purpose of the present study was to investigate whether oncologists and surgeons routinely discuss physical activity with their breast cancer patients and to investigate the nature of any information/advice provided during consultations. A secondary aim was to examine whether physically active oncologists and surgeons were more likely to provide advice about physical activity to patients than inactive oncologists and surgeons. A brief postal questionnaire was sent to 710 consultant breast cancer oncologists and surgeons throughout the UK, and 102 responded (response rate = 14.4%). Of responders, most (55.9%) did not routinely discuss physical activity with their patients. Amongst oncologists/surgeons (clinicians) who did offer advice, most focussed on discussing the benefits of physical activity for physical and functional health gains and for facilitating weight control and maintenance. A number of clinicians indicated they advised patients that physical activity may decrease risk of recurrence and improve survival, despite the lack of evidence from RCTs to support this suggestion. There was no significant association between the physical activity status of oncologists/surgeons and the likelihood that they discussed physical activity with patients. Educational strategies aimed at encouraging clinicians to promote physical activity in consultations need to be targeted widely amongst the cancer clinician community. Evidence from RCTs has shown that physical activity may attenuate the negative effects of cancer treatment in women diagnosed with breast cancer [1,2].
Rates of participation in physical activity also decline substantially during and after completion of treatment for cancer. Oncologists and surgeons may serve as an important source of motivation by encouraging patients to be physically active and by conveying the importance of a healthy lifestyle after cancer diagnosis. Oncologists have also been found to have a favourable attitude towards promoting exercise with cancer patients [5,6].

As part of the development procedures for an RCT of the effects of physical activity on breast cancer outcomes, a brief anonymous postal questionnaire (see Appendix 1) was sent (one mailing with no reminders) to 710 consultants registered with the Cancer Research UK Clinical Trials Unit database, comprising 332 surgeons, 255 clinical oncologists, and 84 medical oncologists (clinicians). Clinicians were asked to report whether they routinely provided advice to patients about physical activity during consultations and to indicate the nature and context of any advice provided. An open comment section was included in the questionnaire where clinicians were given the opportunity to provide written details of the advice they gave to patients. Clinicians were also asked to indicate who they believed would be the most suitable health professional to deliver physical activity intervention/advice to breast cancer patients. In addition, the questionnaire included items to assess clinicians' age, gender, medical speciality, and the amount of moderate intensity physical activity that they typically achieved per week.

A total of 102 breast cancer consultants working across 65 sites in the UK responded to the study questionnaire (response rate = 14.4%). The majority of responders were from district hospitals (56%) and worked in specialist cancer centres (63%). Most respondents were aged 40-50 years (n = 44) or 50-60 years (n = 37). Of responders, 44.1% (n = 45) routinely gave advice to their patients about physical activity. Walking was the most commonly advocated type of activity, although the advice relating to the duration and intensity of physical activity that patients were encouraged to achieve varied considerably [see additional file]. The nature of the advice provided can be broadly categorised into five main themes: benefits for recurrence and mortality, benefits for weight control and management, benefits for physical and functional health, benefits of active healthy living, and general comments about physical activity prescription [see additional file].

Chi-squared analysis showed that oncologists were significantly (p < 0.01) more likely to give patients advice about physical activity than surgeons (n = 36/60: 60.0% versus n = 9/38: 23.7%). Relatively few clinicians (36.5%: n = 33/85) were themselves meeting current public health recommendations for physical activity of at least 150 minutes of moderate intensity physical activity per week. Chi-squared analysis indicated no relationship between physical activity status and whether clinicians promoted physical activity with their patients (yes or no). Clinicians felt nurses (50%: n = 51) and physiotherapists (33.3%: n = 34) were the most suitable health professionals to deliver physical activity interventions to breast cancer patients; 11.8% indicated 'other health professional' in response to this question (e.g. fitness instructor). Only 1.9% (n = 2) of clinicians felt oncologists were the most suitable person to deliver physical activity interventions, and no respondents indicated that surgeons were suitable.

This study found that about 44% of clinicians gave advice to their patients about physical activity.
This finding is consistent with previous research conducted in North America.

Oncologists were much more likely to promote physical activity with their patients than surgeons; this might be because oncologists have greater levels of contact with patients during follow-up and/or because they see patients at the completion of treatment and give formal advice on prevention of relapse. In contrast, surgeons see patients during active treatment, when advice about physical activity may not be as appropriate. These findings also serve to highlight differences in practice between clinicians of different sub-specialities and help to identify where the greatest need for training might lie. Results are also in broad agreement with a previous study.

Amongst clinicians who did offer physical activity advice, most focussed on the physical and functional benefits that it can provide and its role in facilitating weight maintenance. It was interesting to note that many clinicians discussed physical activity with patients in the context that participation may decrease their risk of recurrence and improve survival. Currently there is only epidemiological evidence to support an association between physical activity and mortality from breast cancer, and RCTs are still required to confirm any potential survival benefit.

About a third of clinicians were meeting the current public health recommendations for physical activity per week, which is similar to rates found for the general population in the UK. Those clinicians who were physically active themselves were, however, no more likely to promote physical activity to their patients.

This study should be interpreted in the context of several limitations. The response rate was low, and those who did respond may not be representative of all oncologists and surgeons. Larger studies may be better positioned to explore these issues more precisely. However, responses were obtained from male and female clinicians of different ages located at sixty-five sites throughout the UK, and this should serve to increase the generalisability of the findings. It is anticipated that responders were more likely to have some interest in physical activity than non-responders, and therefore these findings may represent a 'best case scenario' regarding current practice of cancer clinicians.

To the best of our knowledge, this is the first study of its kind to take place in the UK and to provide information about both the qualitative and quantitative nature of the physical activity advice given to patients by oncologists and surgeons. Previous studies from North America have focussed on oncologists, and less is known about the practice of cancer surgeons in relation to physical activity; the present study addresses this gap in the literature.

Many clinicians discuss physical activity with their patients, but a large proportion do not. Few clinicians felt that they were best placed to offer patients advice about physical activity, and this may explain why many do not do so. Greater efforts need to be made to educate cancer clinicians in the UK about physical activity so that patients are routinely advised about the health benefits that participation might provide both during and after active treatment for breast cancer.

RCTs: randomised controlled trials; UK: United Kingdom.

The authors declare that they have no competing interests.

AJD drafted the manuscript and conducted the analyses. SJB coordinated the study, helped to draft the manuscript and contributed to data analysis. All authors participated in conceptualisation and design of the questionnaire study. All authors read and approved the final version of the manuscript.
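As a worked check of the chi-squared comparison reported in the results (36/60 oncologists versus 9/38 surgeons giving advice), the test can be reproduced in a few lines. A minimal Python sketch (not part of the original study):

from scipy.stats import chi2_contingency

# rows: oncologists, surgeons; columns: gave advice, did not give advice
table = [[36, 24],
         [9, 29]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p < 0.01, consistent with the text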
1) Indicate your profession:
Clinical Oncologist □ Medical Oncologist □ Surgeon □

2) Would you normally give patients advice regarding exercise? No □ Yes □
If yes, briefly describe the advice you would give below:
________________________________________________________

3) Who would you consider most suitable to deliver an exercise intervention?
Research Nurse □ Physiotherapist □
Surgeon □ Oncologist □
Other □ please specify: _____________________________

4) We would be grateful if you could provide the following optional information about yourself
a) What is your gender? Female □ Male □
b) What is your age range in years? < 30 □ 30-40 □ 40-50 □ 50-60 □ > 60 □
c) In a typical week how many times, and for how long, do you participate in moderate intensity exercise?
Number of times per week: _____________
Average time per session: _____________ minutes

Advice given to breast cancer patients by consultant oncologists and surgeons in the UK. This information provided an overview of the type of advice given to breast cancer patients by consultant oncologists and surgeons. Click here for file"} {"text": "All instances of the symbols \"t_ant_clear\" and \"t_syn_clear\" relating to Figures 4 and S2 should instead read \"N_ant_double\" and \"N_syn_double\", respectively. Similarly, all instances of \"sigma_syn/sigma_ant\" relating to Figures S2 and S3 should read \"N_ant_double/N_syn_double.\""} {"text": "Here, we describe the complete genome sequence of the Escherichia coli bacteriophage vB_EcoP_PR_Kaz2018, isolated from a water sample. vB_EcoP_PR_Kaz2018 is a linear double-stranded DNA T7-like podophage with a genome of 39,704 bp containing 45 predicted open reading frames (ORFs).

Escherichia coli bacteriophage vB_EcoP_PR_Kaz2018 was isolated from a water sample and is capable of infecting encapsulated Escherichia coli expressing the K1 capsular antigen, a major causative agent of neonatal septicemia, sepsis, and meningitis. The genomic DNA was extracted from the phage lysate. The DNA library was prepared using the Nextera XT DNA sample preparation kit (Illumina). Whole-genome sequencing was performed with an Illumina MiSeq sequencing platform. Low-quality reads were filtered and adapters trimmed with Trimmomatic. The assembled genome showed the highest similarity to Enterobacter phage K1F (GenBank accession number AM084414), with 100% query coverage and 95.28% identity; Escherichia phage LM33_P1 (LT594300), with 86% query coverage and 95.22% identity; Escherichia phage PE3-1 (KJ748011), with 85% query coverage and 95.13% identity; and Escherichia phage YZ1 (MG845865), with 84% query coverage and 94.73% identity.

The complete genome of phage vB_EcoP_PR_Kaz2018 is a linear double-stranded DNA (dsDNA) of 39,704 bp, with 178-bp-long terminal repeats. The GC content of the genome was 49.8%.
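For reference, a genome-wide GC content such as the 49.8% reported here can be recomputed from the assembled FASTA with a few lines. A minimal Python sketch (the file name is hypothetical):

# Compute percent G+C across all sequence lines of a FASTA file.
def gc_content(fasta_path):
    gc = total = 0
    with open(fasta_path) as fh:
        for line in fh:
            if line.startswith(">"):
                continue
            seq = line.strip().upper()
            gc += seq.count("G") + seq.count("C")
            total += len(seq)
    return 100.0 * gc / total

print(f"GC content: {gc_content('vB_EcoP_PR_Kaz2018.fasta'):.1f}%")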
Gene prediction was done using GeneMark and PHAST. A total of 45 open reading frames (ORFs) were predicted.

The complete genome sequence of Escherichia phage vB_EcoP_PR_Kaz2018 was deposited in GenBank under the accession number MN510331. Raw sequence reads are available under BioProject accession number PRJNA574554."} {"text": "Although the de novo transcriptome assembly of non-model organisms has been on the rise recently and new tools are frequently developing, there is still a knowledge gap about which assembly software should be used to build a comprehensive de novo assembly.

In recent years, massively parallel complementary DNA sequencing (RNA sequencing [RNA-Seq]) has emerged as a fast, cost-effective, and robust technology to study entire transcriptomes in various manners. In particular, for non-model organisms and in the absence of an appropriate reference genome, RNA-Seq is used to reconstruct the transcriptome de novo. Here, 10 de novo assembly tools are applied to 9 RNA-Seq data sets spanning different kingdoms of life. Overall, we built >200 single assemblies and evaluated their performance on a combination of 20 biological-based and reference-free metrics. Our study is accompanied by a comprehensive and extensible Electronic Supplement that summarizes all data sets, assembly execution instructions, and evaluation results. Trinity, SPAdes, and Trans-ABySS, followed by Bridger and SOAPdenovo-Trans, generally outperformed the other tools compared. Moreover, we observed species-specific differences in the performance of each assembler. No tool delivered the best results for all data sets. We recommend a careful choice and normalization of evaluation metrics to select the best assembling results as a critical step in the reconstruction of a comprehensive de novo transcriptome assembly.

Even when a reference genome is available, it is still recommended to complement a gene expression study with a de novo transcriptome assembly to identify transcripts that have been missed by the genome assembly process or are just not appropriately annotated. RNA-Seq has established itself as a powerful technique to understand versatile molecular mechanisms and to address various biological questions.

At first glance, the transcriptome assembly process seems similar to genome assembly, but actually there are fundamental differences and various challenges. On the one hand, some transcripts might have a shallow expression level, while others are highly expressed [4,6]. Especially for de novo transcriptome assembly, a broad range of tools is available, among them IDBA-Tran (v1.1.1), a novel assembly tool that claims to be more robust regarding uneven expression levels in RNA-Seq data. All tools were installed and executed on the same machine (Linux, 64-bit). Of course, how easily a tool can be installed and executed depends heavily on the machine used, the server setup, and how familiar the user is with the programming language the tool is based on. Nevertheless, it should be the goal of each publicly available piece of software to be as user-friendly as possible. Therefore, we collected our experiences during the installation and execution of each assembler to share our observations (Table ).
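The recommendation above, to choose and normalize evaluation metrics carefully before ranking assemblies, can be made concrete. A minimal Python sketch (not the authors' implementation) that min-max scales each metric across assemblers within a data set and averages the normalized values into an overall metric score (OMS), assuming all metrics are oriented so that higher is better:

# metrics: dict metric_name -> dict assembler -> value (higher = better)
def overall_metric_scores(metrics):
    scores = {}
    for values in metrics.values():
        lo, hi = min(values.values()), max(values.values())
        span = (hi - lo) or 1.0          # avoid division by zero on ties
        for assembler, v in values.items():
            scores.setdefault(assembler, []).append((v - lo) / span)
    # average of normalized metric scores = overall metric score per assembler
    return {a: sum(s) / len(s) for a, s in scores.items()}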
A comprehensive and extensible Electronic Supplement accompanies this study and is publicly available at www.rna.uni-jena.de/supplements/assembly and at doi.org/10.17605/OSF.IO/5ZDX4. Additional results of BUSCO and TransRate, as well as other results, are also archived in the GigaScience GigaDB repository.

Supplementary Table S1: Data sets and preprocessing.
Supplementary Table S2: Assembly tools.
Supplementary Files S3: Executed assembly commands.
Supplementary Figures S4: HISAT2 re-mapping rate.
Supplementary Figures S5: rnaQUAST statistics.
Supplementary Tables S6: TransRate.
Supplementary Figures S7: ExN50.
Supplementary Figures S8: BUSCO.
Supplementary Tables S9: DETONATE.
Supplementary Tables S10: Selected main metrics.
Supplementary Figures S11: Runtime and memory consumption.
Supplementary Figures S12: Normalized scores per data set and metric.

GIGA-D-18-00307_Original_Submission.pdfClick here for additional data file.
GIGA-D-18-00307_Revision_1.pdfClick here for additional data file.
GIGA-D-18-00307_Revision_2.pdfClick here for additional data file.
GIGA-D-18-00307_Revision_3.pdfClick here for additional data file.
Response_to_Reviewer_Comments_Original_Submission.pdfClick here for additional data file.
Response_to_Reviewer_Comments_Revision_1.pdfClick here for additional data file.
Response_to_Reviewer_Comments_Revision_2.pdfClick here for additional data file.
Reviewer_1_Report_Original_Submission -- Andrey D. Prjibelski, M.Sc. 9/26/2018 ReviewedClick here for additional data file.
Reviewer_1_Report_Revision_1 -- Andrey D. Prjibelski, M.Sc. 2/7/2019 ReviewedClick here for additional data file.
Reviewer_2_Report_Original_Submission -- Brian Haas 9/27/2018 ReviewedClick here for additional data file.
Reviewer_2_Report_Revision_1 -- Brian Haas 1/27/2019 ReviewedClick here for additional data file.

BUSCO: benchmarked universal single-copy orthologs; Chr1: chromosome 1; EBOV: Ebola virus; HSA: Homo sapiens; KC: k-mer compression; MK: multiple k-mer; MS: metric score; nt: nucleotides; OMS: overall metric score; RNA-Seq: RNA sequencing.

The authors declare that they have no competing interests.

This work has been funded by the German Research Foundation (DFG) projects Collaborative Research Center/Transregio 124 "Pathogenic fungi and their human host: Networks of Interaction," subproject B5; DFG SPP-1596 "Ecology and species barriers in emerging viral diseases"; and CRC 1076 "AquaDiva," subproject A06.

M.M. conceived the research idea. M.H. designed the project, performed calculations and analysis, interpreted the data, and wrote the main manuscript. M.M. contributed in discussions and in proofreading the final manuscript. This work is part of the doctoral thesis of M.H. All authors read and approved the final manuscript."} {"text": "Helicobacter pylori causes gastric cancer in 1-2% of cases but is also beneficial for protection against allergies and gastroesophageal diseases. An estimated 85% of H. pylori-colonized individuals experience no detrimental effects. To study the mechanisms promoting host tolerance to the bacterium in the gastrointestinal mucosa and systemic regulatory effects, we investigated the dynamics of immunoregulatory mechanisms triggered by H. pylori using a high-performance computing-driven ENteric Immunity SImulator multiscale model.
Immune responses were simulated by integrating an agent-based model with ordinary and partial differential equations. In silico validation showed that colonization with H. pylori decreased with a decrease in epithelial cell proliferation, which was linked to regulatory macrophages and tolerogenic dendritic cells.

The outputs were analyzed in 2 sequential stages: the first used a partial rank correlation coefficient (PRCC) regression-based global sensitivity analysis, and the second a metamodel-based global sensitivity analysis. The influential parameters screened from the first stage were selected to be varied for the second stage. The outputs from both stages were combined as a training dataset to build a spatiotemporal metamodel. The Sobol indices measured the time-varying impact of input parameters during the initiation, peak, and chronic phases of infection. The study identified epithelial cell proliferation and epithelial cell death as key parameters that control infection outcomes. The hybrid model of H. pylori infection identified epithelial cell proliferation as a key factor for successful colonization of the gastric niche and highlighted the role of tolerogenic dendritic cells and regulatory macrophages in modulating the host responses and shaping infection outcomes.

The outputs from the first stage (152 x 20 runs) and the second stage (115 x 20 runs) were combined to provide the training data to build a spatiotemporal metamodel. For the second-stage analyses, we utilized a metamodeling-based approach. Metamodels are surrogate models that can be used as a substitute for the simulation model. Following the stochastic kriging approach of Ankenman et al., the simulation output at a design point x is modeled as y(x) = \beta_0 + M(x) + \varepsilon(x), where M is a zero-mean Gaussian process and \varepsilon is the simulation noise. Let \Sigma_M denote the k \times k matrix of spatial covariances among the k design points; let \Sigma_M(x_0, \cdot) = ^T be the vector that contains the spatial covariance between the k design points and a given prediction point x_0; let \Sigma_\varepsilon be the k \times k covariance matrix of the vector of simulation errors associated with the vector of point estimates \bar{y}; and let 1_k be the k \times 1 vector of ones. Similar to Ankenman et al., the best linear unbiased predictor is given by Equation (4):

\hat{y}(x_0) = \beta_0 + \Sigma_M(x_0, \cdot)^T (\Sigma_M + \Sigma_\varepsilon)^{-1} (\bar{y} - \beta_0 1_k).    (4)

To implement the metamodeling approach as described above, the unknown model parameters are estimated by maximizing the log-likelihood function; the underlying standard assumption is a Gaussian correlation structure for the spatial covariance, and maximum likelihood estimation is used to obtain the fitted metamodel.

To determine the effect of input variables on the output, we employed the variance decomposition method, which involves decomposing the variance of the output as a sum of the variances produced by each input parameter. For the second-stage analysis, the Sobol indices were computed as described above. In the first-stage PRCC screening, several parameters were shown to have a positive influence on the resident macrophage cells, whereas the T-cell type transition parameters [p_iTregtoTh17 = (0.3-0.4) and p_Th17toiTreg = (0.1-0.2)] showed a negative impact on the resident macrophages. Similarly, we performed the PRCC analysis for all the cell populations under consideration during the infection (not shown).

The significant parameters (marked in blue bars) obtained from the sensitivity analysis of the output from the first-stage design of experiments were selected to be varied for the second-stage design. All the selected inputs are shown in Table . The metamodel was fitted using the mlegp package in R, and diagnostic plots were obtained. In the diagnostic plots, the black circles denote the cross-validated predictions; cross-validation here is in the sense that, for predictions made at design point x, all observations at design point x are removed from the training set. The lower panel represents the residual plots for the cell populations: (a) H. pylori, (b) resident macrophages, (c) monocyte-derived macrophages in the LP, and (d) tolerogenic DCs in the GLN compartment.
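As a numerical illustration of the predictor in Equation (4), the following is a minimal Python sketch (not the authors' code, which used the mlegp package in R; the array names are assumptions):

import numpy as np

# Stochastic kriging predictor, following Equation (4) above.
#   beta0     : trend term (scalar)
#   sigma_M0  : length-k vector, cov[M(x0), M(x_i)] for the k design points
#   Sigma_M   : k x k spatial covariance matrix across the design points
#   Sigma_eps : k x k covariance matrix of the simulation errors
#   y_bar     : length-k vector of point estimates at the design points
def sk_predict(beta0, sigma_M0, Sigma_M, Sigma_eps, y_bar):
    ones = np.ones_like(y_bar)
    # (Sigma_M + Sigma_eps)^{-1} (y_bar - beta0 * 1_k), solved without explicit inversion
    w = np.linalg.solve(Sigma_M + Sigma_eps, y_bar - beta0 * ones)
    return beta0 + sigma_M0 @ w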
ABM: agent-based model; DC: dendritic cell; ENISI MSM: Enteric Immunity Simulator multiscale modeling; GLN: gastric lymph node; GP: Gaussian process; HPC: high-performance computing; IFN-γ: interferon γ; IL: interleukin; iTreg: induced regulatory T; KO: knockout; LP: lamina propria; LSODA: Livermore Solver for Ordinary Differential Equations; ODE: ordinary differential equation; PDE: partial differential equation; PPARγ: peroxisome proliferator-activated receptor γ; PRCC: partial rank correlation coefficient; RRID: Research Resource Identifier; SA: sensitivity analysis; SRCC: Spearman rank correlation coefficient; STAT3: signal transducer and activator of transcription 3; TGF-β: transforming growth factor β; Th: T helper; Treg: regulatory T; WT: wild type.

The authors declare that they have no competing interests.

This work was supported by the Defense Threat Reduction Agency (DTRA) grant HDTRA1-18-1-0008 to J.B.R. and R.H. and funds from the Nutritional Immunology and Molecular Medicine Laboratory (www.nimml.org). The funding body had no role in the design of the study; data collection, analysis, and interpretation of data; or writing of the manuscript.

M.V., R.H., and J.B.R. formulated the model, implemented it, performed the simulations, analyzed model-generated outputs, made the figures, and wrote the manuscript. M.V., A.L., J.B.R., R.H., and S.H. formulated the model. S.H., A.L., and V.A. implemented the code architecture and benchmarked the parallel version of the hybrid model. X.C. and M.V. wrote the codes for global sensitivity analysis and generated the design matrices. N.T.J. generated macrophage and H. pylori experimental data. J.B.R., V.A., and R.H. supervised the project. J.B.R. and R.H. edited the manuscript. J.B.R., A.L., N.T.J., S.H., V.A., X.C., and R.H. participated in discussions on the model and results. All authors provided critical feedback on the project.

giz062_GIGA-D-18-00435_Original_SubmissionClick here for additional data file.
giz062_GIGA-D-18-00435_Revision_1Click here for additional data file.
giz062_GIGA-D-18-00435_Revision_2Click here for additional data file.
giz062_GIGA-D-18-00435_Revision_3Click here for additional data file.
giz062_GIGA-D-18-00435_Revision_4Click here for additional data file.
giz062_Response_to_Reviewer_Comments_Original_SubmissionClick here for additional data file.
giz062_Response_to_Reviewer_Comments_Revision_1Click here for additional data file.
giz062_Response_to_Reviewer_Comments_Revision_2Click here for additional data file.
giz062_Response_to_Reviewer_Comments_Revision_3Click here for additional data file.
giz062_Reviewer_1_Report_Original_Submission Chang Gong -- 12/9/2018 ReviewedClick here for additional data file.
giz062_Reviewer_1_Report_Revision_1 Chang Gong -- 1/28/2019 ReviewedClick here for additional data file.
giz062_Reviewer_2_Report_Original_Submission Elsje Pienaar -- 12/17/2018 ReviewedClick here for additional data file.
giz062_Reviewer_2_Report_Revision_1 Elsje Pienaar -- 1/31/2019 ReviewedClick here for additional data file.
giz062_Reviewer_2_Report_Revision_2 Elsje Pienaar -- 2/18/2019 ReviewedClick here for additional data file.
giz062_Reviewer_3_Report_Revision_2 Paul Macklin, Ph.D. -- 2/25/2019 ReviewedClick here for additional data file.
giz062_Reviewer_3_Report_Revision_3 Paul Macklin, Ph.D. -- 3/21/2019 ReviewedClick here for additional data file.
giz062_Reviewer_3_Report_Revision_4 Paul Macklin, Ph.D. -- 4/7/2019 ReviewedClick here for additional data file.
giz062_Supplement_FilesClick here for additional data file."} {"text": "Diabetes is a significant health concern with more than 30 million Americans living with diabetes. Onset of diabetes increases the risk for various complications, including kidney disease, myocardial infarctions, heart failure, stroke, retinopathy, and liver disease. In this paper, we study and predict the onset of these complications using a network-based approach by identifying fast and slow progressors. That is, given a patient's diagnosis of diabetes, we predict the likelihood of developing one or more of the possible complications, and which patients will develop complications quickly. This combination of “if a complication will be developed” with “how fast it will be developed” can aid the physician in developing a better diabetes management program for a given patient.

Diabetes is a significant public health concern in the United States. According to the Centers for Disease Control and Prevention (CDC), in 2015 it was estimated that 30.3 million people have diabetes, with 23.1 million cases diagnosed and 7.2 million undiagnosed. 90 to 95% of these cases are type 2 diabetes.

To achieve the objective of predictability of onset of complications, we first represent a patient's disease history as a network based on what happens in the second year after a diabetes diagnosis. Genetic determinants and other independent accelerating factors of the complications of diabetes clearly also play a role. The proposed network developed in this study will not only provide a useful modeling construct but also a mechanism for visualizing disease complications. The use of networks to understand disease progression has been studied before, such as in Alzheimer's disease and heart disease.

We use a large data set comprising Type 2 Diabetes patients in Indiana, collected over 20 years and obtained through the Regenstrief Institute. This data includes both diagnosis codes, taken from the International Statistical Classification of Diseases and Related Health Problems, Ninth and Tenth Revisions (ICD-9/ICD-10), and clinical laboratory test results. Researchers have had success using ICD codes to predict future disease states. We create our patient networks from these codes and laboratory results.

Predicting diabetic complications is incredibly challenging due to the inequality of healthcare consumption and the speed at which patients receive diagnoses. In our work, we posit that by establishing appropriate thresholds and choosing balanced populations, we can ensure that even patients who infrequently visit their physician can still benefit from our models.

The Regenstrief Institute created one of the earliest electronic medical record systems in 1972 to support research and continues to handle the research use of the INPC (Indiana Network for Patient Care) database.

In a collaboration of the Indiana Biosciences Research Institute, the Regenstrief Institute, and industrial partners, a primary data set of type 2 diabetes mellitus (T2DM) patients was created, using inclusion criteria of one T2D diagnosis code OR a laboratory glycated hemoglobin (HbA1C) test result ≥ 6.5% OR at least one Medi-Span-defined anti-diabetes medication, where the patients were ≥ 18 years of age on the date of first meeting an inclusion criterion. Using these criteria, a primary T2DM cohort of 805,867 individuals was identified from INPC over 20 years (1995-2015).
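The inclusion rule just stated can be expressed as a simple predicate. A minimal Python sketch (function and argument names are hypothetical, not from the original pipeline):

# A patient enters the T2DM cohort with one T2D diagnosis code OR an HbA1C
# result >= 6.5% OR at least one anti-diabetes medication, and must be >= 18
# years old on the date the first criterion is met.
def meets_inclusion(age_at_first_event, t2d_codes, hba1c_results, antidiabetes_meds):
    qualifies = (bool(t2d_codes)
                 or any(v >= 6.5 for v in hba1c_results)
                 or bool(antidiabetes_meds))
    return qualifies and age_at_first_event >= 18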
The demographics, diagnosis codes, medical procedures, prescriptions, and results from clinical laboratory tests were extracted for these individuals.

To clean this T2DM data set, the extracted INPC data were placed on a secure Amazon Web Services (AWS) server. This large T2DM dataset spanning 20 years was multi-modal, and there were many missing parameters across the records, as well as inconsistencies in the measurements, identified by error codes, per-patient longitudinal analysis, or out-of-range values. In addition, we had to take into account the correction of features that were reported for quality control (QC) checks. To that end, we implemented a comprehensive data cleaning framework, using PySpark, to normalize the features, remove bad or missing values, and ensure consistent units of measure. The feature values were normalized, and extreme values were identified and filtered on the minimum and maximum values ever measured for a parameter. Additionally, any values +/- 2 standard deviations from the median were filtered. We also looked for more than two distribution patterns in the data, where potentially two different units of measure were applied to the same variable, which could indicate a problem with poor previous data integration. After this extensive effort to clean all the issues from this “real-world” data set captured from INPC, an “analysis-ready” data set was created for the modeling. An overview of the size of the different data tables is given in Table .

We use the following to categorize primary T2DM diagnoses and complications:
Type 2 diabetes mellitus - ICD-9/ICD-10 codes 249, 250, 357.2, 362.[01-07], 366.41, E10, E11
Kidney disease - ICD-9/ICD-10 codes 584, 586, 585, 403, 404, 581, 583, 588, N18, N17, N19, I12, I13, N04, N05, N08, N25, 593
Liver disease - ICD-9/ICD-10 codes 571, 572, 573, K76, K75
Heart failure - ICD-9/ICD-10 codes 428, I50
Myocardial infarction - ICD-9/ICD-10 codes 410, 412, I21
Stroke - ICD-9/ICD-10 codes 435, G45, 430, 431, I60, I61, 432, I62, 436, 433, 434
Retinopathy - ICD-9/ICD-10 codes 362, H35

We further sample to create the following data about patients: patient diagnosis, which contains all the diagnosis codes (ICD-9/ICD-10) received by a patient; demographics, which contains age, gender, and race/ethnicity information; and clinical variables, which contains metabolic measurements taken while at the doctor's office. Header files for the diagnosis table are given in Table .

We detail the network construction in Algorithm 1 and network pruning in Algorithm 2.
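Algorithms 1 and 2 are not reproduced here; the following minimal Python sketch (simplified, hypothetical data structures, not the authors' implementation) conveys the idea: nodes are demographic bins, truncated ICD codes, and lab-value quartiles observed in the second year after diagnosis, every co-occurring pair of items becomes an edge, and counts are kept separately for fast and slow progressors so that significance can be tested later.

from collections import defaultdict
from itertools import combinations

# patients: iterable of (items, is_fast), where items is a set of node labels
def build_network(patients):
    node_counts = defaultdict(lambda: [0, 0])   # node -> [fast, slow] counts
    edge_counts = defaultdict(lambda: [0, 0])   # edge -> [fast, slow] counts
    for items, is_fast in patients:
        idx = 0 if is_fast else 1
        for item in items:
            node_counts[item][idx] += 1
        for a, b in combinations(sorted(items), 2):
            edge_counts[(a, b)][idx] += 1
    return node_counts, edge_counts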
We retain a listing of the edges and nodes that represent the fast paths to diabetic complications, along with the nodes that result in the largest information gain.

There are three primary data sources that we use to build our models: patient demographic data, which remains constant throughout the duration of the study and is represented by nodes at the beginning of the network at time zero; patient diagnoses, which contain all the diagnoses that occur over the course of a patient's visits with a doctor or healthcare provider; and clinical variables, which contain all the available measurements and laboratory tests in the patient's health records as contained in INPC.

We tested the following clinical variables and grouped them into quartiles, which were included in the clinical variables file: non-high-density lipoprotein cholesterol (Non-HDL C), low-density lipoprotein (LDL) high-density lipoprotein (HDL) ratio, thyroid-stimulating hormone (TSH), fibrosis-4 (Fib 4) index, total cholesterol, low-density lipoprotein cholesterol (LDL C), high-density lipoprotein cholesterol (HDL C), cholesterol ratio, total bilirubin, basophil platelet count (PC), monocyte count, aspartate transaminase to platelet ratio index (APRI), neutrophil count, albumin, alkaline phosphatase (ALP), aspartate transaminase (AST) alanine transaminase (ALT) ratio, eosinophil PC, protein, HbA1C, ALT, estimated glomerular filtration rate (eGFR), AST, lymphocyte PC, calcium, red blood PC, platelet count, mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), glucose, blood urea nitrogen (BUN), chloride, creatinine, and carbon dioxide (CO2).

Additionally included in the clinical variables file were the following variables, pre-processed into normal and abnormal statuses: weight classification, HDL C, high serum creatinine, high urine glucose, hyperglycemia, hypertension, hypertriglyceridemia, impaired fasting glycemia (IFG), impaired glucose tolerance (IGT), LDL C, and triglycerides. Finally, we also quartile the age of the patients so that we have large groups to test on. Every piece of information in a patient history is then linked to all other nodes, thus creating a heterogeneous network. An example of the network is given in Fig. .

After building the network, we prune it by discarding any edges that do not show statistically significant differences between the fast and slow progressors, as determined by a two-proportion Z-test score.

To determine whether a patient is a slow or fast progressor, the nodes and edges of the sub-network that match the patient's medical history are traversed, and the individual probability of developing a complication is computed. We assume that the node and edge weights, corresponding to the percentages of patients who suffer from that complication that are contained by that node or edge, are equally likely and statistically independent. These weights are multiplied together to get the probability of being a fast progressor. To decrease noise, we experimentally concluded that only the weights, or percent likelihoods of developing the specific complication of diabetes, corresponding to the top 12 most significant edges and nodes should be used, as determined by the two-proportion Z-test. In other words, for each individual patient, we only used the most significant parts of their individual network to predict whether or not that patient was a fast or slow progressor.
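This scoring rule, formalized in the next paragraph, can be sketched in a few lines of Python (a simplified illustration, not the authors' code; 'scored' is a hypothetical list of (z_score, weight) pairs for the patient's sub-network):

import math

# Keep the top-n (n <= 12) weights ranked by |z|, drop w_h (the weight with
# the lowest probability), and multiply the rest under the stated
# independence assumption.
def fast_progressor_probability(scored, top_n=12):
    top = sorted(scored, key=lambda zw: abs(zw[0]), reverse=True)[:top_n]
    weights = [w for _, w in top]
    if len(weights) > 1:
        weights.remove(min(weights))   # remove w_h
    return math.prod(weights)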
The average AUC values from each of these experiments are shown in Table . The method to compute the probability that an individual will be a fast or slow progressor is as follows: let w_0, ..., w_n correspond to the n most significant edge and node weights, as determined by the two-proportion Z-test, where n ≤ 12; remove from the computation the weight w_h that corresponds to the lowest probability of developing the complication; the product of the remaining weights gives the patient's probability estimate.

Only information in the patient history that occurred in the second year following a Type 2 diabetes diagnosis is considered. Healthy patients survive longer than sickly ones, so if we extend our analysis for too long after a diabetes diagnosis, the data will become biased towards healthy patients. Patients also tend to move and change doctors, and analyzing what occurs in the second year after the diagnosis ensures that many patients are still in the system, as can be seen in Fig. . Data preparation proceeded as follows:

Diagnoses are truncated to the first three digits of the ICD-9 or ICD-10 code to remove the disease subtypes and focus only on the primary diagnoses.
All nodes that are not shared by at least one percent of the population are removed.
All patients who have received fewer than five diagnoses, or more than twice the median number of diagnoses, are removed; this mitigates biases introduced by individuals having an excessive medical history or too few observations.
The cleaned dataset is sampled to ensure that our fast and slow progressor groups have the same number of patients.
The significance of the edges is computed, and any edges that do not pass a two-proportion z-test at 95 percent confidence are removed.
Fast progressors are defined as patients who develop a complication of diabetes faster than 75 percent of the population. All patients who develop the complication before being diagnosed with diabetes, or up to one year afterwards, are removed from our dataset.
Slow progressors are defined as patients who develop a complication of diabetes more slowly than 75 percent of the population. Everyone retained in our network is eventually diagnosed with the complication, which helps ensure the datasets are balanced and have limited bias.
Every node and every edge is given a Z-score, which corresponds to the likelihood of a significant difference between fast and slow progressors. Every node and edge is also given the percent likelihood that a patient who has the condition given in the node, or the combination of conditions represented by an edge, will be a fast or slow progressor.
We only consider new diagnoses that occur after a diabetes diagnosis; we do not consider diagnoses or lab values that occurred before the type 2 diabetes diagnosis. Incorporating past values might be included in future work.

Our test set contained 20 percent of our patients. The percent likelihood of their complication development was computed against the patient network generated from the 80 percent training set. We queried the large network for nodes and edges corresponding to an individual patient's disease history. Because all the edges that failed to show a significant difference between the fast and slow progressors were pruned, the sub-network might be disconnected.
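The two-proportion z-score used to score and prune nodes and edges is the standard test; a minimal Python sketch (not the authors' code):

import math

# Compare the share of fast progressors carrying a feature (node or edge)
# with the share of slow progressors carrying it.
def two_prop_z(fast_with, fast_total, slow_with, slow_total):
    p1 = fast_with / fast_total
    p2 = slow_with / slow_total
    p = (fast_with + slow_with) / (fast_total + slow_total)
    se = math.sqrt(p * (1 - p) * (1 / fast_total + 1 / slow_total))
    return (p1 - p2) / se

# Edges with |z| below 1.96 (95 percent confidence) are discarded during pruning.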
The top five conditions that lead to each complication, by percentage of fast progressors and Z-score, are given in Table . The results of these predictions of fast progressors for the onset of the various diabetic complications are shown in Table .

Diabetic complications are often correlated with one another, which might reflect the generalized damage that the body has taken from a micro- and macrovascular perspective. Many of the top confidence nodes are shared between different complications. Correlations between the fast and slow progressors are given in Table .

In our future work, we would like to examine the false positives and identify what causes them to not develop complications immediately, even though their diagnosis history and lab results identify them as fast progressors. This will inform health management strategies – lifestyle, behavioral, or environmental factors – in addition to the medication used to manage diabetes. We believe this analysis should help enable recommendations for diabetic patients to limit the development of complications.

Given a patient's disease history and lab results, we can predict their likelihood of developing complications from diabetes. We also show which disease diagnoses or lab results (from our heterogeneous network or graph) are most likely to lead to specific diabetic complications. We reaffirm that diabetes is a complicated disease, and it continues to be important for diabetic patients to manage their disease and be aware of the complications. The diagnosis graphs can help illuminate health problems faced by many patients and what might be the best course of disease management. Not managing complications, especially for fast progressors, can cause rapid development of uncontrolled diabetes, from which it is hard to recover. Moreover, disease diagnosis graphs can also be a useful tool for physicians to understand the effects of co-morbid conditions and personalize a wellness and disease management plan.
This can lead to an improvement in both individual and population health outcomes.

Below is a list of the data columns included in the clinical variables file: STUDYID, AGE, DAYS_VIS_INDEX, GENDER, INDEX_AGE, angiotensin converting enzyme (ace), acetaminophen, acetone, act, albumin, albumin_creatinine_ratio, albumin_globulin_ratio, alcohol_pc, aldolase, aldosterone, alp, alp_bone_isoenzyme, alpha_1_antitrypsin, alpha_1_globulin, alpha_2_globulin, alpha_tocopherol, alt, ammonia, amylase, anion_gap, aorta_sinuses_diam, aortic_root_diam, aov_peak_pressure, aov_peak_velocity, apri, arterial_diastolic_bp, ast, ast_alt_ratio, antithrombin iii (atiii), band count (cnt), band_pc, bard_score, base_excess, basophil_count, basophil_pc, beta2_microglobulin, beta_globulin, beta_hydroxybutyrate, bicarbonate, blast_count, blast_pc, body mass index (bmi), body_surface_area, bun, bun_cr, bun_post_dialysis, bun_pre_dialysis, complement 3 (c3), complement 4 (c4), c_peptide, calciferol, calcium, calcium_albumin, carboxyhemoglobin, cyclic citrullinated peptide (ccp), cluster of differentiation (cd) 2_t_cells, cd3_t_cells, cd4_cd8_ratio, cd4_helper_t, cd4_t_cells, cd8_supprs_t_cells, cd8_t_cells, carcinoembryonic antigen (cea), cell_count, chloride, cholecalciferol, cholesterol_ratio, creatine kinase (ck)_bb, ck_index, ck_mb, ck_mb_tot, ck_mm, ck_total, chronic kidney disease (ckd)_stage, co2, colony_count, conjugated_bilirubin, cortisol, creatinine, creatinine_ck, creatinine_clear, c-reactive protein (crp), central venous pressure (cvp), d_dimer, dehydroepiandrosterone (dhea_s), diabetic_nephropathy_status, diabetic_status, diastolic_bp, diastolic_bp_standing, direct_bilirubin, epstein-barr (ebv)_antibody, eGFR, eosinophil_count, eosinophil_pc, esr, estradiol_unconjugated, estrogen, factor_viii_activity, fasting_glucose, forced expiratory flow (fef)25_75, ferritin, fib_4_index, fibrinogen, fraction of inspired oxygen (fio2), folate, free_lambda, fructosamine, follicle-stimulating hormone (fsh), gamma-glutamyl transpeptidase (ggt), globulin, glucose, glucose_gtt_1h, glucose_gtt_1hr_ob, glucose_gtt_2h, glucose_gtt_3h, glucose_gtt_pp, hba1c, hdl_c, hdl_c_status, hdl_ cholesterol (chol), hdl_ldl, height, hepatitis (hepb)_ab, hemoglobin (hgb), hemoglobin a2 (hgb_a2), high_serum_creatinine_status, high_urine_glucose_status, histamine, homeostatic model assessment of beta cell function (homa_b), homeostatic model assessment of insulin resistance (homa_ir), homocysteine, hyperglycemia_status, hypertension_status, hypertriglyceridemia_status, ifg_status, immunoglobulin a (iga), immunoglobulin e (ige), insulin-like growth factor 1 (igf_1), immunoglobulin g (igg), immunoglobulin m (igm), igt_status, immature_granulocytes_pc, indirect_bilirubin, insulin, iron, interventricular septum (ivs)_thickness, left atrium (la)_diameter, lactate, lactate_dehydrogenase, lactic acid dehydrogenase (ldh)_1, ldh_2, ldh_3, ldh_4, ldh_5, ldl_c, ldl_c_status, ldl_hdl_ratio, lh, lipase, lipoprotein (lpa), left ventricle (lv)_mass, lv_stroke_volume, lv_systolic_volume, left ventricular outflow tract (lvot)_peak_gradient, lvot_peak_velocity, left ventricular posterior wall (lvpw)_thickness_diastolic, lymphocyte_atypical, lymphocyte_count, lymphocyte_pc, lymphocyte_reactive, lymphocyte_variant, lymphocyte cerebrospinal fluid (csf), macrophage_pc, map, mch, mcv, mean_arterial_pressure, mean_glucose_bld_ghb_test, mesothelial_cells_pc, metamyelocytes_count, metamyelocytes_pc, methemoglobin, methemoglobin_pc, mixed_mono_count, mixed_mono_pc, monocyte_count, monocyte_csf_pc, monocyte_pc, myelocyte_count, myelocyte_pc, nafld_fibrosis_score, neutrophil_count, neutrophil_pc, non_hdl_c, nucleated red blood cells (nrbc)_count, nrbc_pc, nrbc_white blood cell (wbc), N-terminal pro b-type natriuretic peptide (nt_probnp), nucleated_cell_count, oxygen (o2), oxyhemoglobin_pc, p_wave_offset, p_wave_onset, partial pressure of carbon dioxide (pco2), ph, phosphorus, platelet_count, partial pressure of oxygen (po2), poly_count, poly_pc, potassium, pr_interval, pre_diabetic_status, progesterone_17_OH, promyelocytes_count, prostate_free, prostrate_total, protein, pulse, qt_corrected, quantitative insulin-sensitivity check index (quicki), red blood cell distribution width (rdw), red_blood_cell_count_csf, red_blood_pc, renal_exocrine pancreatic insufficiency (epi)_cells, respiratory_rate, selenium, serum_osmolality, smudge_cell_count, sodium, systolic_bp, systolic_bp_standing, triiodothyronine (t3)_free, t3_total, thyroxine (t4)_free, t4_total, t_wave_axis, t_wave_offset, temperature, testosterone_free, testosterone_total, total iron binding capacity (tibc), total_bilirubin, total_cholesterol, triglyceride_hdl_ratio, triglycerides, triglycerides_status, troponin, troponin_2h, tsh, urine albumin-to-creatinine ratio (uacr), unconjugated_billirubin, uric_acid, urine_albumin, urine_ascorbate, urine_bacteria, urine_billirubin, urine_cast, urine_chloride, urine_cortisol_free, urine_creatinine, urine_creatinine_24, urine_crystals, urine_epithelial_cells, urine_gamma_globulin, urine_glucose, urine_granular_cast, urine_hgb, urine_hyaline_cast, urine_ketones, urine_microalbumin, urine_microalbumin_24, urine_microalbumin_creatinine_ratio, urine_microalbumin_creatinine_ratio_24, urine_potassium, urine_protein, urine_protein_24, urine_protein_creatinine_ratio, urine_red blood cells (rbc), urine_specific gravity (sp_grav), urine_squaous_epithelial (epi)_cells, urine_trans_epi_cells, urine_urea_nitrogen, urine_urobilinogen, urine_waxy_cast, vitamin (vit)_a, vit_b1, vit_b12, vit_d2, vit_25-hydroxyvitamin d2(d2_25_oh), very low-density lipoprotein (vldl), vldl_c, waist_circumference, wbc_count, wbc_count_csf, weight, weight_classification, zinc, CARDIOVASCULAR, NEPHROPATHY, LIVER, OUTCOME"} {"text": "Scientific Reports 10.1038/s41598-019-52700-w, published online 12 November 2019.

Correction to: In this Article, there is a typographical error in the Data availability section, where:

“The EEG data and HFO markings are freely available at https://gin.gnode.org/USZ_NCH/Scalp_EEG_HFO.”

should read:

“The EEG data and HFO markings are freely available at https://gin.g-node.org/USZ_NCH/Scalp_EEG_HFO.”"}